
python - What design pattern to handle atomicity of data state with pydantic models when reading/writing the data from a storage database


I would like to discuss my design problem in order to evaluate what I am doing right and what I can do better. To date I create Pydantic models of my business logic, saving their state and managing some of the logic through Pydantic features such as validation and computed fields.
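
To make the setup concrete, a model of the kind I mean looks roughly like this (the field names are just illustrative, not my real schema):

from pydantic import BaseModel, computed_field

class LineItem(BaseModel):
    quantity: int
    unit_price: float

    @computed_field  # recomputed on every validation and included in model_dump()
    @property
    def total(self) -> float:
        return self.quantity * self.unit_price

class BusinessModel(BaseModel):
    id: str
    key1: str = ""
    key2: str = ""
    items: list[LineItem] = []  # nested sub-models kept inside one document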

The models are then dumped into dicts when saving them to a storage database such as MongoDB. Now, assuming I modeled the data with "denormalization" in mind, where I have one big model that contains nested sub-models, how can I avoid stale reads and overwrites when multiple actors are i) reading from the db into Pydantic, ii) changing data with Pydantic, and iii) saving the dict back to the db?

To date I have seen the atomicity problem resolved at the database level, where operations are atomic, but here step ii) runs at the application level. Do I have to resort to using sessions/transactions to lock the database resource, i.e. the Pydantic model/dict under analysis, or am I designing things wrong?

For instance:

model_dict = await db.find_one({"id": "my_model_id"})  # read from db, returns the data state as a dict
model = BusinessModel.model_validate(model_dict)  # convert the dict into a Pydantic model
# when converting, Pydantic features such as validation logic and computed fields run automatically, changing the data
...
model.key1 = "some other processing"
...
await asyncio.sleep(10)  # stand-in for a long-running application-level step
...
model.key2 = "some other processing"
model_dict_updated = model.model_dump()  # convert the Pydantic model back into a dict
await db.replace_one({"id": "my_model_id"}, model_dict_updated)  # overwrite the document back into the db
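
One option I am evaluating is optimistic locking at the application level: keep a version counter in the Pydantic model and only replace the document if the version stored in the database still matches the one I read. A rough sketch of what I mean (the version field, the helper name and the retry decision are my own hypothetical additions, not an existing API):

from pydantic import BaseModel

class BusinessModel(BaseModel):
    id: str
    key1: str = ""
    key2: str = ""
    version: int = 0  # bumped on every successful write

async def update_once(db) -> bool:
    model_dict = await db.find_one({"id": "my_model_id"})
    model = BusinessModel.model_validate(model_dict)
    expected_version = model.version

    # application-level processing, possibly slow
    model.key1 = "some other processing"
    model.key2 = "some other processing"
    model.version += 1

    # replace only if no other actor has written a newer version in the meantime
    result = await db.replace_one(
        {"id": "my_model_id", "version": expected_version},
        model.model_dump(),
    )
    return result.modified_count == 1  # False means a stale read: re-read and retry

The drawback I see is that the slow step ii) work is wasted whenever the write is rejected and has to be retried.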

I'm reading these references:

  • write conflicts
  • transactions
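
For the transactions option specifically, my current understanding is that the read-modify-write would have to run inside a session so the server can detect write conflicts, roughly like this (a sketch assuming the Motor driver, where client is the Motor client and db the collection handle from my snippet above; whether holding a transaction open across slow application code is acceptable is exactly what I am unsure about):

async with await client.start_session() as session:
    async with session.start_transaction():
        model_dict = await db.find_one({"id": "my_model_id"}, session=session)
        model = BusinessModel.model_validate(model_dict)
        model.key1 = "some other processing"
        model.key2 = "some other processing"
        await db.replace_one(
            {"id": "my_model_id"},
            model.model_dump(),
            session=session,
        )
# as I understand it, a concurrent write to the same document would make
# this transaction abort with a write conflict and I would have to retry it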
