pandas to_sql all columns as nvarchar

I have a pandas dataframe that is created dynamically, with column names that vary. I’m trying to push it to SQL, but I don’t want the columns to land in MS SQL Server as the default datatype TEXT (can anyone explain why this is the default? Wouldn’t it make sense to use a more common datatype?)

Does anyone know how I can specify a datatype for all columns?

column_errors.to_sql('load_errors', push_conn, if_exists='append', index=False, dtype=#Data type for all columns#)

The dtype argument takes a dict, and since I don’t know the column names in advance it is hard to set them all to sqlalchemy.types.NVARCHAR.

This is what I would like to do:

column_errors.to_sql('load_errors', push_conn, if_exists='append', index=False, dtype='sqlalchemy.types.NVARCHAR')

Any help or guidance on how best to specify all column types would be much appreciated!

Answer

You can create this dict dynamically if you do not know the column names in advance:

from sqlalchemy.types import NVARCHAR

# Build the dtype mapping dynamically: one NVARCHAR entry per column
df.to_sql(..., dtype={col_name: NVARCHAR for col_name in df})

Note that you have to pass the SQLAlchemy type object itself (or an instance, if you need to specify parameters such as NVARCHAR(length=10)), not a string as in your example.
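For illustration, here is a minimal end-to-end sketch of the approach. The in-memory SQLite engine and the sample dataframe are stand-ins for your own connection and data:

import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.types import NVARCHAR

# Hypothetical setup: an in-memory SQLite engine stands in for your
# MS SQL Server connection, and the column names are only known at runtime.
engine = create_engine('sqlite://')
column_errors = pd.DataFrame({'error_code': ['E1', 'E2'],
                              'message': ['bad row', 'duplicate key']})

# One entry per column; pass an instance like NVARCHAR(length=255)
# if the target database should get an explicit length.
dtype_map = {col_name: NVARCHAR(length=255) for col_name in column_errors.columns}
column_errors.to_sql('load_errors', engine, if_exists='append',
                     index=False, dtype=dtype_map)

As far as I know, if you pass NVARCHAR without a length, the SQL Server dialect will typically emit NVARCHAR(max); that is often fine for a staging table, but such columns cannot be indexed, so an explicit length can be worth specifying.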
