
catalystbyzoho - I want to bulk read more than 400,000 records from the Catalyst Datastore - Stack Overflow


I have more than 400,000 records in the Catalyst Datastore and need to read the data to generate a CSV file containing the results of the read operation.

I tried using the bulk read method in the Node.js SDK, but it only retrieves up to 200,000 records. How can I read all the tables in the Catalyst Datastore?


asked Feb 5 at 19:38 by Mahesh Varadhan; edited Feb 5 at 20:41 by Tangentially Perpendicular
  • Have you used the LIMIT clause to try to fetch it in two chunks? – Tim Roberts Commented Feb 5 at 19:57
  • Lakh (and crore) are not widely understood outside India. Please use common English terms. – Tangentially Perpendicular Commented Feb 5 at 20:38
  • Please provide enough code so others can better understand or reproduce the problem. – Community Bot Commented Feb 6 at 1:17

1 Answer


I'm currently working on a similar requirement: bulk reading more than 300,000 records from the Catalyst Datastore and generating a CSV file. However, a single bulk read request can retrieve at most 200,000 records.

To work around this limit, execute multiple bulk read requests iteratively, each fetching up to 200,000 records. By combining the results of those requests, you can retrieve all the required records and generate the complete CSV file.

To fetch the next set of records, pass the page key in the bulk read API request. For example, page: 1 retrieves the first 200,000 records, page: 2 fetches the next 200,000, and so on.

You can refer to their official help documentation here
