Messages
POST
/v1/messages
The Messages API can be used for either single queries or stateless multi-turn conversations.
Request Example
Shell
curl --location --request POST '/v1/messages' \
--header 'anthropic-beta;' \
--header 'anthropic-version: 2023-06-01' \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "claude-3-7-sonnet-20250219",
"max_tokens": 1024,
"messages": [
{
"role": "user",
"content": "Hello, world"
}
]
}'
Response Example
200 - Example 1
{
"content": [
{
"text": "Hi! My name is Claude.",
"type": "text"
}
],
"id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
"model": "claude-3-7-sonnet-20250219",
"role": "assistant",
"stop_reason": "end_turn",
"stop_sequence": null,
"type": "message",
"usage": {
"input_tokens": 2095,
"output_tokens": 503
}
}
Request
Header Params
anthropic-beta
string
optional
Optional header to specify the beta version(s) you want to use. To use multiple betas, provide a comma-separated list such as beta1,beta2, or specify the header multiple times, once for each beta. A request sketch showing this follows the x-api-key example below.
anthropic-version
string
required
Example:
2023-06-01
x-api-key
string
required
Example:
$ANTHROPIC_API_KEY
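A hedged sketch of the two forms described for anthropic-beta; beta1 and beta2 are placeholder names, not real beta identifiers.
# Repeat the header once per beta (a single comma-separated header,
# 'anthropic-beta: beta1,beta2', is equivalent).
curl --location --request POST '/v1/messages' \
--header 'anthropic-beta: beta1' \
--header 'anthropic-beta: beta2' \
--header 'anthropic-version: 2023-06-01' \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "claude-3-7-sonnet-20250219",
  "max_tokens": 1024,
  "messages": [{"role": "user", "content": "Hello, world"}]
}'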
Body Params application/json
max_tokens
integer
required
> 1
messages
array[object]
required
Input messages. Our models are trained to operate on alternating user and assistant conversational turns. When creating a new Message, you specify the prior conversational turns with the messages parameter, and the model then generates the next Message in the conversation. Consecutive user or assistant turns in your request will be combined into a single turn.
Each input message must be an object with a role and content. You can specify a single user-role message, or you can include multiple user and assistant messages.
If the final message uses the assistant role, the response content will continue immediately from the content in that message. This can be used to constrain part of the model's response; a full request sketch using this technique follows the examples below.
Example with a single user message:
[{"role": "user", "content": "Hello, Claude"}]
Example with multiple conversational turns:
[
{"role": "user", "content": "Hello there."},
{"role": "assistant", "content": "Hi, I'm Claude. How can I help you?"},
{"role": "user", "content": "Can you explain LLMs in plain English?"},
]
Example with a partially-filled response from Claude:
[
{"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
{"role": "assistant", "content": "The best answer is ("},
]
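A minimal request sketch built from the partially-filled example above; the assistant turn constrains the model to continue from "The best answer is (". The model name and max_tokens simply mirror the request example at the top of this page.
curl --location --request POST '/v1/messages' \
--header 'anthropic-version: 2023-06-01' \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "claude-3-7-sonnet-20250219",
  "max_tokens": 16,
  "messages": [
    {"role": "user", "content": "What'\''s the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
    {"role": "assistant", "content": "The best answer is ("}
  ]
}'
# The response content should then continue the prefilled turn, e.g. [{"type": "text", "text": "B)"}].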
Each input message content may be either a single string or an array of content blocks, where each block has a specific type. Using a string for content is shorthand for an array of one content block of type "text". The following input messages are equivalent:
{"role": "user", "content": "Hello, Claude"}
{"role": "user", "content": [{"type": "text", "text": "Hello, Claude"}]}
You can also send image content blocks:
{"role": "user", "content": [
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/jpeg",
"data": "/9j/4AAQSkZJRg...",
}
},
{"type": "text", "text": "What is in this image?"}
]}
We currently support the base64 source type for images, and the image/jpeg, image/png, image/gif, and image/webp media types.
Note that if you want to include a system prompt, you can use the top-level system parameter; there is no "system" role for input messages in the Messages API. A complete image request sketch follows the role field below.
content
string
required
role
enum<string>
required
Allowed values:
user, assistant
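As referenced under the content description above, a hedged sketch of a complete image request; the base64 data shown is a truncated placeholder and must be replaced with a real encoded image (for example, the output of base64 image.jpg).
curl --location --request POST '/v1/messages' \
--header 'anthropic-version: 2023-06-01' \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "claude-3-7-sonnet-20250219",
  "max_tokens": 1024,
  "messages": [{
    "role": "user",
    "content": [
      {"type": "image", "source": {"type": "base64", "media_type": "image/jpeg", "data": "/9j/4AAQSkZJRg..."}},
      {"type": "text", "text": "What is in this image?"}
    ]
  }]
}'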
model
string
required
>= 1 characters, <= 256 characters
metadata
object
required
user_id
string | null
required
<= 256 characters
stop_sequences
array[string]
required
Custom text sequences that will cause the model to stop generating. Our models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn". If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence. A request sketch follows the stream field below.
stream
boolean
required
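The stop-sequence behaviour described above, sketched as a request; the "###" sequence is just an illustrative value.
curl --location --request POST '/v1/messages' \
--header 'anthropic-version: 2023-06-01' \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "claude-3-7-sonnet-20250219",
  "max_tokens": 1024,
  "stop_sequences": ["###"],
  "messages": [{"role": "user", "content": "List three colors, then print ### and continue with three more."}]
}'
# If generation halts on "###", the response should report
# "stop_reason": "stop_sequence" and "stop_sequence": "###".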
system
string
required
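As noted under the content description above, system prompts go in this top-level system field rather than in a "system"-role message; a minimal sketch:
curl --location --request POST '/v1/messages' \
--header 'anthropic-version: 2023-06-01' \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "claude-3-7-sonnet-20250219",
  "max_tokens": 1024,
  "system": "You are a terse assistant. Answer in one sentence.",
  "messages": [{"role": "user", "content": "Why is the sky blue?"}]
}'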
temperature
number
required
Amount of randomness injected into the response. Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical / multiple choice, and closer to 1.0 for creative and generative tasks. Note that even with temperature of 0.0, the results will not be fully deterministic.
> 0, < 1
thinking
required
Configuration for enabling Claude's extended thinking. When enabled, responses include thinking content blocks showing Claude's thinking process before the final answer. Requires a minimum budget of 1,024 tokens and counts towards your max_tokens limit. A request sketch follows the type field below.
One of
budget_tokens
integer
required
Determines how many tokens Claude can use for its internal reasoning process. Must be less than max_tokens.
> 1024
type
enum<string>
required
Allowed value:
enabled
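As referenced in the thinking description above, a hedged sketch with extended thinking enabled; budget_tokens stays at or above the 1,024-token minimum and below max_tokens.
curl --location --request POST '/v1/messages' \
--header 'anthropic-version: 2023-06-01' \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "claude-3-7-sonnet-20250219",
  "max_tokens": 4096,
  "thinking": {"type": "enabled", "budget_tokens": 2048},
  "messages": [{"role": "user", "content": "Is 1027 prime? Explain briefly."}]
}'
# max_tokens must leave room for both the thinking budget and the final answer.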
tool_choice
required
One of
type
enum<string>
required
Allowed value:
auto
disable_parallel_tool_use
boolean
required
Whether to disable parallel tool use. Defaults to false. If set to true, the model will output at most one tool use.
tools
array[object]
required
Definitions of tools that the model may use. If you include tools in your API request, the model may return tool_use content blocks that represent the model's use of those tools. You can then run those tools using the tool input generated by the model and then optionally return results back to the model using tool_result content blocks.
Each tool definition includes:
name: Name of the tool.
description: Optional, but strongly-recommended description of the tool.
input_schema: JSON schema for the tool input shape that the model will produce in tool_use output content blocks.
For example, if you defined tools as:
[
{
"name": "get_stock_price",
"description": "Get the current stock price for a given ticker symbol.",
"input_schema": {
"type": "object",
"properties": {
"ticker": {
"type": "string",
"description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
}
},
"required": ["ticker"]
}
}
]
And then asked the model "What's the S&P 500 at today?", the model might produce tool_use content blocks in the response like this:
[
{
"type": "tool_use",
"id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
"name": "get_stock_price",
"input": { "ticker": "^GSPC" }
}
]
You might then run your get_stock_price tool with {"ticker": "^GSPC"} as an input, and return the following back to the model in a subsequent user message (a complete first-request sketch follows the examples below):
[
{
"type": "tool_result",
"tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
"content": "259.75 USD"
}
]
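Putting the pieces above together, a hedged sketch of the first request in that tool-use loop: the tool definition plus the user question. A follow-up request would append the returned tool_use block as an assistant turn and the tool_result content shown above as the next user message.
curl --location --request POST '/v1/messages' \
--header 'anthropic-version: 2023-06-01' \
--header "x-api-key: $ANTHROPIC_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "claude-3-7-sonnet-20250219",
  "max_tokens": 1024,
  "tool_choice": {"type": "auto"},
  "tools": [{
    "name": "get_stock_price",
    "description": "Get the current stock price for a given ticker symbol.",
    "input_schema": {
      "type": "object",
      "properties": {
        "ticker": {"type": "string", "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."}
      },
      "required": ["ticker"]
    }
  }],
  "messages": [{"role": "user", "content": "What'\''s the S&P 500 at today?"}]
}'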
top_k
integer
required
Only sample from the top K options for each subsequent token. Used to remove "long tail" low-probability responses. Recommended for advanced use cases only; you usually only need to use temperature.
> 0
top_p
number
required
Use nucleus sampling. In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches the probability specified by top_p. You should either alter temperature or top_p, but not both. Recommended for advanced use cases only; you usually only need to use temperature.
> 0, < 1
Examples
Responses
🟢 200 Success
application/json
Body
content
required
Content generated by the model. This is an array of content blocks, each of which has a type that determines its shape.
Example:
[{"type": "text", "text": "Hi, I'm Claude."}]
If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.
For example, if the input messages were:
[
{"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
{"role": "assistant", "content": "The best answer is ("}
]
Then the response content might be:
[{"type": "text", "text": "B)"}]
One of
citations
object | null
required
Citations supporting the text block. The type of citation returned will depend on the type of document being cited: citing a PDF results in page_location, plain text results in char_location, and a content document results in content_block_location.
text
string
required
<= 5000000 characters
type
enum<string>
required
Allowed value:
text
Default:
text
id
string
required
model
string
required
>= 1 characters, <= 256 characters
role
enum<string>
required
"assistant"
.Allowed value:
assistant
Default:
assistant
stop_reason
enum<string>
required
The reason that we stopped. This may be one of the following values: end_turn (the model reached a natural stopping point), max_tokens (we exceeded the requested max_tokens or the model's maximum), stop_sequence (one of your provided custom stop_sequences was generated), or tool_use (the model invoked one or more tools). In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
Allowed values:
end_turn, max_tokens, stop_sequence, tool_use
stop_sequence
string | null
required
type
enum<string>
required
"message"
.Allowed value:
message
Default:
message
usage
object
required
Billing and rate-limit usage. Anthropic's API bills and rate-limits by token counts, as certain forms of content in the API may have internal processing. Because of that, the token counts in usage will not match one-to-one with the exact visible content of an API request or response. For example, output_tokens will be non-zero, even for an empty string response from Claude. Total input tokens in a request is the summation of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens (see the sketch after output_tokens below).
cache_creation_input_tokens
integer | null
required
> 0
cache_read_input_tokens
integer | null
required
> 0
input_tokens
integer
required
> 0
output_tokens
integer
required
> 0
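As a worked example of the total-input-token arithmetic described under usage, assuming the 200 response was saved to a hypothetical response.json and jq is available; the cache fields fall back to 0 when absent or null.
jq '.usage | (.input_tokens // 0) + (.cache_creation_input_tokens // 0) + (.cache_read_input_tokens // 0)' response.json
# For the 200 example above this prints 2095, since no prompt caching was used.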
🟠 400 Bad Request