Can’t append to an existing table. Fails silently

I’m trying to dump a pandas DataFrame into an existing Snowflake table from a Jupyter notebook. When I run the code below, no errors are raised, but no data is written to the destination Snowflake table (the DataFrame has ~800 rows).

import os

from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL

sf_engine = create_engine(
    URL(
        user=os.environ['SF_PROD_EID'],
        password=os.environ['SF_PROD_PWD'],
        account=account,
        warehouse=warehouse,
        database=database,
    )
)

df.to_sql(
    "test_table",
    con=sf_engine,
    schema=schema,
    if_exists="append",
    index=False,
    chunksize=16000,
)

If I check the SF History, I can see that the queries apparently ran without issue:

[screenshot: Snowflake History page showing the queries completing successfully]

If I pull the query from the SF History UI and run it manually in the Snowflake UI, the data shows up in the destination table.

If I try to use locopy instead, I run into the same issue.

If the table does not exist beforehand, the same code above creates the table and loads the rows with no problem.

Here’s where it gets weird. If I run the pd.to_sql append, then drop the destination table, and then issue a select count(*) from destination_table, a table with that name still exists and contains only the data I’ve been trying to append. Could this be a case-sensitive table naming situation?
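
One way to check for that is to list lookalike table names with a SHOW TABLES query. A sketch, reusing the sf_engine and database variables from above; the text() wrapper assumes SQLAlchemy 1.4+:

from sqlalchemy import text

# SHOW TABLES LIKE matches case-insensitively, so this surfaces quoted,
# case-sensitive variants (e.g. "test_table" vs. TEST_TABLE) that an
# unquoted SELECT would never resolve to.
with sf_engine.connect() as conn:
    for row in conn.execute(text(f"SHOW TABLES LIKE '%test_table%' IN DATABASE {database}")):
        print(row)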

Any insight is appreciated :)

Answer

The issue was due to how I set up the database connection and the case sensitivity of the table name. It turns out I was writing to a table called DB.SCHEMA."db.schema.test_table" (note how the db.schema slug becomes part of the quoted table name). Don’t be like me, kids: use upper-case table names in Snowflake!
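
For reference, a minimal sketch of a setup that avoids the mangled name, assuming the same account/warehouse/database/schema variables and DataFrame as in the question: set the schema on the connection itself and pass an upper-case table name, which matches Snowflake’s default upper-case identifier resolution even after pandas quotes it.

import os

from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL

sf_engine = create_engine(
    URL(
        user=os.environ['SF_PROD_EID'],
        password=os.environ['SF_PROD_PWD'],
        account=account,
        warehouse=warehouse,
        database=database,
        schema=schema,  # schema lives on the connection, not in the table name
    )
)

df.to_sql(
    "TEST_TABLE",  # upper-case so the quoted identifier matches TEST_TABLE
    con=sf_engine,
    if_exists="append",
    index=False,
    chunksize=16000,
)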
