How To Prevent Fast Execute Many Giving Truncation Error

I'm not sure how to reproduce this, as I don't know what is causing the issue in the first place. However, I have the following code, which writes data to a SQL Server table.

dbEngine = sqlalchemy.create_engine(constring, fast_executemany=True)

try:
    df_to_sql.to_sql(table_name, con=dbEngine, schema='dbo', if_exists='append', index=False, chunksize=50)
except Exception:
    # On failure, rebuild the table from the existing rows plus the new data
    df_from_sql = pd.read_sql(f'SELECT * FROM {table_name}', dbEngine)
    new_data = pd.concat([df_from_sql, df_to_sql])
    new_data.to_sql(table_name, con=dbEngine, schema='dbo', if_exists='replace', index=False, chunksize=50)

However, I get the following error:

('String data, right truncation: length 1000 buffer 510', 'HY000')

I have checked SQL Server and see that the column's varchar length has been set to -1, which indicates varchar(max). Any idea what could be causing this?
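One way to narrow this down: the error reports "length 1000 buffer 510", which suggests a 1000-character value being bound to a buffer of 510 bytes (roughly 255 characters in a two-byte encoding), so some column the driver binds to is narrower than the data. A quick diagnostic, sketched below with a hypothetical dataframe standing in for df_to_sql, is to report the longest string in each text column and compare against the column widths SQL Server actually reports:

```python
import pandas as pd

# Hypothetical dataframe standing in for df_to_sql from the question.
df_to_sql = pd.DataFrame({
    'short_col': ['abc', 'def'],
    'long_col': ['x' * 1000, 'y' * 40],
})

# Longest string per object (text) column; the column whose maximum exceeds
# the bound buffer size is the likely source of the truncation error.
max_lens = {
    col: int(df_to_sql[col].astype(str).str.len().max())
    for col in df_to_sql.select_dtypes(include='object').columns
}
print(max_lens)  # {'short_col': 3, 'long_col': 1000}
```

Comparing this against the lengths in INFORMATION_SCHEMA.COLUMNS for the target table would show whether the mismatch is in the table definition or in how the driver binds the parameters.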

Cheers


Answer

The issue was that I was using an incorrect engine and was not replacing the table every time a new column was found. Changing the engine fixed this, and now everything writes to the database as required.
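A related precaution when letting to_sql create or replace the table is to pin the text column types explicitly via its dtype parameter, so pandas does not create a column narrower than the data. The sketch below uses an in-memory SQLite engine so it is self-contained; against SQL Server the same dtype mapping applies (Text maps to a large text type on most backends), and the table and column names here are illustrative:

```python
import pandas as pd
import sqlalchemy
from sqlalchemy.types import Text

# In-memory SQLite engine for a runnable sketch; the question used a
# SQL Server connection string with fast_executemany=True instead.
engine = sqlalchemy.create_engine('sqlite://')

df = pd.DataFrame({'id': [1, 2], 'notes': ['x' * 1000, 'short']})

# dtype= forces the created column to a wide text type, so long values
# are not truncated by a narrow default column definition.
df.to_sql('demo', con=engine, if_exists='replace', index=False,
          dtype={'notes': Text()})

round_trip = pd.read_sql('SELECT * FROM demo', engine)
print(len(round_trip.loc[0, 'notes']))  # 1000: the long value survives
```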

User contributions licensed under: CC BY-SA