I have a large CSV file with about 200,000 rows, which I load into a pandas DataFrame and need to write out as another CSV; I moved away from Excel output because it was consuming too much memory. However, the code is deployed on a t3.small server (2 GiB of RAM), and even the CSV export is causing memory issues. Below is the code I'm using:
import gc

with open(file_path, 'w', newline='') as f:
    for i in range(0, len(dataframe), chunk_size):
        chunk = dataframe.iloc[i:i + chunk_size]
        # Write the header only with the first chunk, then append rows.
        chunk.to_csv(f, header=(i == 0), index=False, date_format='%Y-%m-%d %H:%M:%S')
        del chunk
        gc.collect()
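
For reference, I understand that pandas can also do this batching itself through the chunksize argument of to_csv, instead of slicing the DataFrame manually. Below is a minimal self-contained sketch of that variant; the placeholder data, the output.csv path, and the 10,000 chunk size are only assumptions for illustration.

import pandas as pd

# Placeholder data so the sketch runs on its own; in my real case the
# DataFrame already holds ~200,000 rows.
dataframe = pd.DataFrame({
    'ts': pd.date_range('2024-01-01', periods=1_000, freq='min'),
    'value': range(1_000),
})
file_path = 'output.csv'

# chunksize tells pandas to write the rows in internal batches rather
# than building the whole CSV output in one go.
dataframe.to_csv(
    file_path,
    index=False,
    date_format='%Y-%m-%d %H:%M:%S',
    chunksize=10_000,
)

As I understand it, this mainly changes how the write is batched; the full DataFrame still has to fit in memory either way.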
How can I optimize this to reduce memory usage?