
Write capacity exceeded when updating with batches


I have a Lambda function (Python) that updates existing DynamoDB items using PartiQL. The updates are executed in batches of 5, and I use the standard retry mode with 5 attempts:

    import boto3
    from botocore.config import Config

    # Standard retry mode with up to 5 attempts per request
    config = Config(
        retries={
            'max_attempts': 5,
            'mode': 'standard'
        }
    )

    # The config is passed to the DynamoDB client used for the updates
    client = boto3.client('dynamodb', config=config)

In addition, I add a random sleep of 20ms to 60ms between batches. The Lambda is invoked with at most 200 records each time.
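For illustration, here is a minimal sketch of the loop described above, reusing the client configured earlier. The table name, attribute names, and record shape are hypothetical placeholders; batch_execute_statement is the boto3 PartiQL batch API:

    import random
    import time

    def update_in_batches(client, records, batch_size=5):
        # records: list of (pk, status) tuples -- a hypothetical shape for illustration
        for i in range(0, len(records), batch_size):
            batch = records[i:i + batch_size]
            client.batch_execute_statement(
                Statements=[
                    {
                        # Table and attribute names are placeholders
                        'Statement': 'UPDATE "my_table" SET status = ? WHERE pk = ?',
                        'Parameters': [{'S': status}, {'S': pk}],
                    }
                    for pk, status in batch
                ]
            )
            # Random 20-60 ms pause between batches
            time.sleep(random.uniform(0.02, 0.06))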

I see that despite the retry mode and the random sleep time, I sometimes get this error:

The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.

The reason is not clear, because when I look at the monitoring tab of the DynamoDB table, I can see that the write usage is well below the provisioned capacity (image attached).

What can I do to solve it? Thanks.


asked Mar 13 at 9:10 by Mister_L, edited Mar 13 at 9:41 by DarkBee
  • Seems like the graph shows average consumed capacity, not maximum. – gshpychka, Mar 14 at 14:24
  • Do you get the same error if you change the billing type of the DynamoDB table to non-provisioned (on-demand)? – shantanuo, Mar 18 at 7:11

1 Answer


The error is clear: you exceeded your provisioned capacity. The CloudWatch graph is misleading because it shows average consumption over a 1-minute period, while DynamoDB enforces limits on a per-second basis. For example, if you made 5000 requests within 1-3 seconds and nothing for the remainder of the minute, the per-minute average works out to roughly 83 requests per second, even though your actual burst rate was well over 1,600 per second.

To overcome this throttling, you can either raise the minimum value of your auto scaling configuration, or, more effectively, switch the table to on-demand mode, which accommodates bursts instantly.
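As a sketch, switching an existing table to on-demand billing can be done with the UpdateTable API; the table name below is a placeholder:

    import boto3

    client = boto3.client('dynamodb')

    # Switch the table from provisioned to on-demand (pay-per-request) billing.
    # Note: DynamoDB only allows switching billing modes once every 24 hours.
    client.update_table(
        TableName='my_table',  # placeholder table name
        BillingMode='PAY_PER_REQUEST',
    )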
