
python - How can I work with larger than memory Snowflake datasets in polars? - Stack Overflow


I'm trying to work with data stored in a Snowflake database using polars in Python. I see I can access the data with pl.read_database_uri using the adbc engine. I was wondering how I can do this efficiently for larger-than-memory datasets.

  • Is it possible to stream the results using polars' lazy API, or any other method?
  • Is it possible to batch the results as pl.read_database can? Or is it possible to partition the results, as the docs say is possible with connectorx?
  • Are there any other ways I might use polars to help work with larger-than-memory datasets in this instance? Or do I need to do my processing in SQL so that the data comes into python in a manageable size?

Thanks!
