
amazon web services - anthropic.claude-3-haiku-20240307-v1:0 with on-demand throughput isn’t supported. Retry your request with the ID or ARN of an inference profile that contains this model


When calling *anthropic.claude-3-haiku-20240307-v1:0* in AWS Bedrock, I encountered the following error:

ERROR: Can't invoke 'anthropic.claude-3-haiku-20240307-v1:0'.
Reason: An error occurred (ValidationException) when calling 
the InvokeModel operation: Invocation of model ID anthropic.claude-3-haiku-20240307-v1:0 with on-demand throughput isn’t supported. 
Retry your request with the ID or ARN of an inference profile that contains this model.

Here is the example code from the AWS documentation (https://docs.aws.amazon.com/bedrock/latest/userguide/bedrock-runtime_example_bedrock-runtime_InvokeModel_AnthropicClaude_section.html):

# Use the native inference API to send a text message to Anthropic Claude.

import boto3
import json

from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-2")

# Set the model ID, e.g., Claude 3 Haiku.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "temperature": 0.5,
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": prompt}],
        }
    ],
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

print(response)
# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["content"][0]["text"]
print(response_text)

asked Mar 14 at 23:26 by xsqian; edited Mar 15 at 1:51 by Hilory

1 Answer


As per the official documentation, Claude 3 Haiku is only available through cross-region inference, not on-demand throughput.

Change the model ID as below:

# Set the model ID, e.g., Claude 3 Haiku.
model_id = "us.anthropic.claude-3-haiku-20240307-v1:0"

https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html

https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html
