Intro
In this blog, I will share some notes and thoughts from working through the Anthropic Prompt Tutorial. Here is the link to the tutorial.
Notes
01-Basic Prompt Structure
There are two types of APIs provided by Anthropic:
- Text Completions API
- Messages API
Usually, we use the Messages API for its flexibility. There are several parameters in the Messages API:
- `model`: the model to use. `claude-3-5-sonnet-20240620` is recommended for most use cases if you do not mind the cost.
- `messages`: the messages to send to the model.
- `max_tokens`: the maximum number of tokens to generate.
- `temperature`: the temperature of the model. A low temperature is preferred for more consistent and predictable responses; a high temperature is preferred for more creative and diverse responses.
- `system`: the system message, used to set the behavior of the model. It is recommended to provide a detailed description of the model's behavior and capabilities.
For a complete list of all parameters, please refer to the official documentation.
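Putting these parameters together, a helper like the tutorial's can be sketched as follows. This is a minimal sketch, assuming the official `anthropic` Python SDK (`pip install anthropic`) and an `ANTHROPIC_API_KEY` environment variable; `build_request` and `get_completion` are my own names, not the tutorial's.

```python
def build_request(prompt: str, system_prompt: str = "", temperature: float = 0.0) -> dict:
    """Assemble keyword arguments for client.messages.create()."""
    request = {
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": 1024,
        "temperature": temperature,  # 0.0 = most consistent output
        "messages": [{"role": "user", "content": prompt}],
    }
    if system_prompt:  # the system message is optional
        request["system"] = system_prompt
    return request

def get_completion(prompt: str, system_prompt: str = "") -> str:
    import anthropic  # imported lazily so build_request stays SDK-free
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(**build_request(prompt, system_prompt))
    return response.content[0].text
```

Separating request assembly from the network call keeps the prompt-building logic easy to test without an API key.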
A standard function to call the Messages API is given in the tutorial.

02-Being Clear and Direct
The first important principle for prompt design is to be clear and direct. Claude reads the prompt and tries to follow it. If the prompt is not clear, the model may not behave as expected. When in doubt, follow the golden rule of clear prompting:
* Show your prompt to a colleague or friend and have them follow the instructions themselves to see if they can produce the result you want. If they’re confused, Claude’s confused.
There are some examples in the tutorial:
- “Who is the best basketball player of all time?” is a bad prompt because it is not clear what criteria should be used to determine the best basketball player of all time. So we need to tell Claude that it has to pick one player, to stop it from giving several names.
- “Write a haiku about robots” may cause a problem: Claude does not go straight into the poem but adds a preamble first. So we need to tell Claude to skip the preamble and go straight into the poem.
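The revised prompts for these two examples look roughly like this (the wording is paraphrased from the tutorial, not quoted verbatim):

```python
# "Be clear and direct": spell out exactly what you want.
vague = "Who is the best basketball player of all time?"
direct = (
    "Who is the best basketball player of all time? "
    "Yes, there are differing opinions, but if you absolutely had to pick "
    "one player, who would it be?"
)

# Tell Claude to skip the preamble and start with the poem itself.
terse_haiku = (
    "Write a haiku about robots. "
    "Skip the preamble; go straight into the poem."
)
```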
03-Assigning Roles(Role Prompting)
Assigning a specific role to Claude is important for getting the output you want. Role prompting is a technique that can change the style, tone, and manner of Claude's response.
Here is one of the examples from the tutorial:
- The original prompt is “In one sentence, what do you think about skateboarding?” Now add a role via the system prompt: “You are a cat.” I think there are huge differences between the two outputs.
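In Messages API terms, the two requests differ only in the `system` field. A small sketch (the request shape follows the Messages API; the helper name is my own):

```python
USER_PROMPT = "In one sentence, what do you think about skateboarding?"

def role_request(system_prompt: str) -> dict:
    """messages.create() kwargs; the assigned role lives in `system`."""
    return {
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": 200,
        "system": system_prompt,
        "messages": [{"role": "user", "content": USER_PROMPT}],
    }

plain = role_request("")                 # no role assigned
as_cat = role_request("You are a cat.")  # role prompting
```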
04-Separating Data and Instructions
Usually, we want Claude to do the same thing every time, but the data used for each task is different. So we set up a prompt template and insert a placeholder into it. The tutorial illustrates this with an example:
In the example, the word “Cow” replaces the ANIMAL placeholder in the prompt template. It is important to note that we should make clear to Claude where the variable data begins and ends. A second example shows why this matters: Claude thinks the greeting “Yo Claude” is also part of the email that needs to be rewritten. That is the problem we need to avoid. How? Wrapping the variable in XML tags solves it.
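Both ideas can be sketched as simple template functions (the helper names and exact wording are my own illustration, not the tutorial's):

```python
def animal_prompt(animal: str) -> str:
    """Prompt template with a placeholder, delimited by XML tags."""
    return (
        "I will tell you the name of an animal. Please respond with the "
        f"noise that animal makes. <animal>{animal}</animal>"
    )

def polite_email_prompt(email: str) -> str:
    # Without the <email> tags, a greeting like "Yo Claude" at the start
    # of the email can be mistaken for part of the instructions.
    return f"Make this email more polite: <email>{email}</email>"
```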
05-Formatting Output and Speaking for Claude
XML tags not only make the prompt clearer and more parseable to Claude, but they also help Claude generate output that is clearer and more easily understandable to humans.
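“Speaking for Claude” means prefilling the assistant turn so the reply must continue from where you left off, e.g. inside an XML tag or as JSON. A minimal sketch of the message list (the helper name is my own):

```python
def prefilled_messages(prompt: str, prefill: str) -> list:
    """Prefill the assistant turn; Claude continues from the prefill text."""
    return [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": prefill},
    ]

haiku = prefilled_messages("Write a haiku about robots; use <haiku> tags.", "<haiku>")
json_out = prefilled_messages("Describe a cat as a JSON object.", "{")  # forces JSON
```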
Claude is also good at outputting in JSON format. Simply prompting Claude to output JSON is enough (not deterministically, but close to it).

06-Precognition(Thinking Step by Step)
Giving Claude time to think step by step is important for improving output quality, particularly for complex reasoning tasks that require multiple steps.
In the tutorial's example, the second prompt asks Claude to brainstorm first and then give the final answer.
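A “think first, answer second” prompt can be sketched like this (the tag names and wording are my own, not the tutorial's):

```python
def step_by_step(question: str) -> str:
    """Ask Claude to reason before committing to an answer."""
    return (
        f"{question}\n"
        "First, brainstorm your reasoning inside <brainstorm> tags. "
        "Then give your final answer inside <answer> tags."
    )

prompt = step_by_step("Is this movie review positive or negative?")
```

Keeping the reasoning and the answer in separate tags also makes the final answer easy to extract programmatically.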
07-Using Examples(Few-Shot Prompting)
Few-shot prompting is an effective technique for helping Claude get the right answer or produce the answer in the right format. The number of “shots” refers to how many examples are used within the prompt.
In the tutorial's example, it is also possible to describe the tone you want Claude to use; however, giving Claude an example is much easier than describing the tone.
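A few-shot prompt can be assembled mechanically: each (question, answer) pair is one “shot”, and Claude is expected to continue the pattern for the final question. A sketch with my own helper name and sample data:

```python
def few_shot_prompt(examples: list, question: str) -> str:
    """Join (question, answer) shots, then leave the last answer open."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

prompt = few_shot_prompt(
    [("Is a tomato a fruit?", "Yes, botanically it is a fruit.")],
    "Is a cucumber a fruit?",
)
```

Ending the prompt with a bare `A:` nudges Claude to answer in exactly the same terse format as the shots.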
08-Avoiding Hallucinations
Hallucinations are errors in which Claude makes up information that is not supported by its training data or the prompt. It is important to avoid hallucinations because they can be very misleading and confusing. There are some techniques to avoid hallucinations:
- Giving Claude the option to say it does not know the answer to a question.
- Asking Claude to find evidence before answering.
An easy example is as follows:
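Both techniques can be combined in one prompt. A minimal sketch (the wording and helper name are my own illustration, not the tutorial's exact example):

```python
def grounded_prompt(question: str, context: str) -> str:
    """Ask for evidence first, and allow an explicit "I don't know"."""
    return (
        f"<context>{context}</context>\n"
        f"Answer the question using only the context above: {question}\n"
        "First, gather relevant quotes as evidence in <evidence> tags. "
        "If the context does not contain the answer, say \"I don't know\" "
        "instead of guessing."
    )
```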
Sometimes, we can also set the `temperature` parameter to a lower value to reduce the risk of hallucinations. Temperature is a measure of answer creativity between 0 and 1, with 1 being more unpredictable and less standardized, and 0 being the most consistent.

Conclusion
In this blog, I quickly took some short notes on the Anthropic Prompt Tutorial. There are still many techniques the tutorial does not cover, such as self-consistency and tree-of-thought prompting. I highly recommend checking out the tutorial for more details.
- Author: Chengsheng Deng
- URL: https://chengshengddeng.com/article/notes-on-anthropic-prompt-tutorial
- Copyright: All articles in this blog, unless otherwise stated, are licensed under the BY-NC-SA agreement. Please indicate the source!