
python - Conditional running total based on date field in Pandas - Stack Overflow


I have a dataframe with the data below.

        DateTime  Tag  Qty
2025-01-01 13:00    1  270
2025-01-03 13:22    1   32
2025-01-10 12:33    2   44
2025-01-22 10:04    2  120
2025-01-29 09:30    3  182
2025-02-02 15:05    1  216
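
For anyone who wants to reproduce this, here is a minimal sketch of the sample frame (my reconstruction from the table above, with an assumed pd.to_datetime parse for the DateTime strings):

import pandas as pd

# Reconstruction of the sample data shown above (a sketch, not the original code)
df = pd.DataFrame({
    "DateTime": ["2025-01-01 13:00", "2025-01-03 13:22", "2025-01-10 12:33",
                 "2025-01-22 10:04", "2025-01-29 09:30", "2025-02-02 15:05"],
    "Tag": [1, 1, 2, 2, 3, 1],
    "Qty": [270, 32, 44, 120, 182, 216],
})
df["DateTime"] = pd.to_datetime(df["DateTime"])  # assumed: parse the strings into real datetimes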

To be achieved: two new columns. The first holds the cumulative sum of Qty up to each row's DateTime over rows where Tag is not equal to 2; the second holds the cumulative sum of Qty up to each row's DateTime over rows where Tag is equal to 2. Below is the result I am looking for.

        DateTime  Tag  Qty  RBQ  RSQ
2025-01-01 13:00    1  270  270    0
2025-01-03 13:22    1   32  302    0
2025-01-10 12:33    2   44  302   44
2025-01-22 10:04    2  120  302  164
2025-01-29 09:30    3  182  484  164
2025-02-02 15:05    1  216  700  164

I've been searching for a method but can't seem to get it right. Could I please get some help?

Thanks,

asked Mar 30 at 5:26 by Abdul Gaffoor G K

2 Answers

You can use a condition together with the mask and where methods to create both columns:

cond = df['Tag'].eq(2)                         # True on rows where Tag == 2
df['RBQ'] = df['Qty'].mask(cond, 0).cumsum()   # zero out the Tag == 2 rows, then take the running total
df['RSQ'] = df['Qty'].where(cond, 0).cumsum()  # keep only the Tag == 2 rows, then take the running total
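
To see what mask and where are doing here (a toy sketch of their semantics, not part of the answer's code): mask(cond, 0) zeroes the rows where the condition is True, while where(cond, 0) zeroes the rows where it is False, so each Qty value feeds exactly one of the two running totals.

import pandas as pd

qty = pd.Series([270, 32, 44])
cond = pd.Series([False, False, True])   # True would mean Tag == 2

print(qty.mask(cond, 0).tolist())    # [270, 32, 0]  -> what the RBQ cumsum runs over
print(qty.where(cond, 0).tolist())   # [0, 0, 44]    -> what the RSQ cumsum runs over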

Another solution is to use the same condition and pivot the dataframe based on that.

df2 = (df.join(df.assign(cols=df['Tag'].eq(2).map({True: 'RSQ', False: 'RBQ'}))
               .pivot(columns='cols', values='Qty')
               .fillna(0).cumsum())
      )
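
If it helps, you can print the intermediate wide frame before it is joined back (the same expressions as above, just split out so the running totals are visible on their own):

# Sketch: inspect the intermediate result of the pivot before the join
wide = (df.assign(cols=df['Tag'].eq(2).map({True: 'RSQ', False: 'RBQ'}))
          .pivot(columns='cols', values='Qty')
          .fillna(0).cumsum())
print(wide)   # two columns, RBQ and RSQ, already holding the conditional running totals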

End result:

        DateTime  Tag  Qty   RBQ   RSQ
2025-01-01 13:00    1  270 270.0   0.0
2025-01-03 13:22    1   32 302.0   0.0
2025-01-10 12:33    2   44 302.0  44.0
2025-01-22 10:04    2  120 302.0 164.0
2025-01-29 09:30    3  182 484.0 164.0
2025-02-02 15:05    1  216 700.0 164.0
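
One small side effect (my note, not the answer's): the pivot/fillna round trip turns RBQ and RSQ into floats, which is why the table shows 270.0 rather than 270. Assuming Qty is always a whole number, you could cast them back afterwards:

df2[['RBQ', 'RSQ']] = df2[['RBQ', 'RSQ']].astype(int)   # assumed cleanup, restores integer dtype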

Edit: Here is a slightly modified version that takes the DateTime column into consideration. I modified the first value in the DateTime column as an example.

cond = df['Tag'].eq(2)
tmp = df.sort_values(by='DateTime')             # compute the running totals in chronological order

df['RBQ'] = tmp['Qty'].mask(cond, 0).cumsum()   # the assignment realigns the result to df by index
df['RSQ'] = tmp['Qty'].where(cond, 0).cumsum()
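
One assumption worth making explicit (my note, not the answer's): sort_values only gives chronological order if DateTime is a real datetime column or ISO-formatted strings. If the column holds strings in another format, convert it first:

df['DateTime'] = pd.to_datetime(df['DateTime'])   # assumed step if DateTime is not already datetime64
tmp = df.sort_values(by='DateTime')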

For the second solution, you will have to use merge instead of join.

df2 = pd.merge(df, (df.assign(cols=df['Tag'].eq(2).map({True: 'RSQ', False: 'RBQ'}))
                   .pivot(index='DateTime', columns='cols', values='Qty')
                   .fillna(0).cumsum()), on='DateTime')
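
Note (my addition, not from the answer): the pivot step assumes there are no duplicate (DateTime, RBQ/RSQ) combinations; DataFrame.pivot raises a ValueError if it finds any, so if duplicate timestamps are possible the mask/where approach above is the safer choice.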

End result:

           DateTime  Tag  Qty   RBQ   RSQ
2025-02-04 13:00:00    1  270 700.0 164.0
2025-01-03 13:22:00    1   32  32.0   0.0
2025-01-10 12:33:00    2   44  32.0  44.0
2025-01-22 10:04:00    2  120  32.0 164.0
2025-01-29 09:30:00    3  182 214.0 164.0
2025-02-02 15:05:00    1  216 430.0 164.0

Just filter by the Tag column, take the cumsum(), and then fill the gaps:

df["RBQ"]=df[df["Tag"]!=2]["Qty"].cumsum()
df["RSQ"]=df[df["Tag"]==2]["Qty"].cumsum()
df1=df.ffill().fillna(0)
df1.to_markdown()
           DateTime  Tag  Qty  RBQ  RSQ
0  2025-01-01 13:00    1  270  270    0
1  2025-01-03 13:22    1   32  302    0
2  2025-01-10 12:33    2   44  302   44
3  2025-01-22 10:04    2  120  302  164
4  2025-01-29 09:30    3  182  484  164
5  2025-02-02 15:05    1  216  700  164
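
To see why the ffill()/fillna(0) step is needed (a toy sketch, not part of the answer): the filtered cumsum only produces values on the matching rows and leaves NaN elsewhere, and forward-filling carries the last running total into those gaps (the leading NaN in RSQ is what the final fillna(0) handles).

import pandas as pd

qty = pd.Series([270, 32, 44, 120, 182, 216])
tag = pd.Series([1, 1, 2, 2, 3, 1])

partial = qty[tag != 2].cumsum()          # running total computed on the non-Tag-2 rows only
print(partial.reindex(qty.index))         # NaN on the Tag == 2 rows
print(partial.reindex(qty.index).ffill()) # gaps filled with the last running total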