I am trying to write several pandas dataframes (all with the same format) to a Postgres db, but some of them are randomly not written. In those cases to_sql fails silently and returns -1 (which I take to indicate the failure).
I don't use any schema, which should rule out this issue as a possible cause, and I am not using SQL Server either. What baffles me is that some of these dataframes are written to the db and some are not.
code:
from sqlalchemy import create_engine, DateTime

# credentials redacted
engine = create_engine('postgresql+psycopg://plantwatch:<password>@<host>/plantwatch')

# df is one of the dataframes described below
df.to_sql('power', con=engine, if_exists='append', index=False, dtype={'produced_at': DateTime})
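To at least make the failure visible, I was thinking of running the insert inside an explicit transaction and comparing row counts before and after (just a sketch, reusing the engine and df from above; the count check is my own addition, not something I've verified fixes anything):

from sqlalchemy import text

# explicit transaction: errors should raise instead of failing silently
with engine.begin() as conn:
    rows_before = conn.execute(text('SELECT count(*) FROM power')).scalar()
    df.to_sql('power', con=conn, if_exists='append', index=False, dtype={'produced_at': DateTime})
    rows_after = conn.execute(text('SELECT count(*) FROM power')).scalar()

print(f'inserted {rows_after - rows_before} of {len(df)} rows')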
Example df (each dataframe holds the rows for one id, and each id should end up in the db) and expected db content:
produced_at id value
2015-01-01 00:00:00 someid 1
2015-01-01 01:00:00 someid 2
2015-01-01 00:00:00 someid2 1
2015-01-01 01:00:00 someid2 2
actual db content:
produced_at id value
2015-01-01 00:00:00 someid 1
2015-01-01 01:00:00 someid 2
A wacky workaround would be to dump all dataframes to .csv files and import each of them one by one into Postgres, but there has to be a better way.
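If it came to that, the CSV route could probably be scripted with psycopg's COPY support instead of importing files by hand (again only a sketch, assuming the table and column layout shown above; copy_df and the connection string are placeholders of my own):

import io
import psycopg

def copy_df(df, conninfo):
    # serialize the dataframe as CSV in memory; columns: produced_at, id, value
    buf = io.StringIO()
    df.to_csv(buf, index=False, header=False)
    buf.seek(0)
    with psycopg.connect(conninfo) as conn:
        with conn.cursor() as cur:
            # stream the CSV straight into the table via COPY
            with cur.copy("COPY power (produced_at, id, value) FROM STDIN WITH (FORMAT csv)") as copy:
                copy.write(buf.getvalue())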