I have a dataframe df and I want to execute a query to insert all the values from the dataframe into a table. Basically I am trying to load it with the following query: For that I have the following code: However, I am getting the following error: Does anyone know what I am doing wrong?
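A minimal sketch of that kind of bulk insert with pyodbc's executemany; the connection string, table, and column names below are placeholders, not the asker's actual schema.

```python
# Hypothetical example: insert every DataFrame row with a parameterized query.
import pandas as pd
import pyodbc

df = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})  # stand-in for the asker's df

conn = pyodbc.connect("DSN=my_dsn")  # placeholder connection string
cursor = conn.cursor()
cursor.fast_executemany = True  # batch parameters instead of one round trip per row
cursor.executemany(
    "INSERT INTO my_table (id, name) VALUES (?, ?)",
    list(df.itertuples(index=False, name=None)),  # DataFrame rows as plain tuples
)
conn.commit()
```

The usual pitfall here is building the INSERT by string formatting; the ?-style parameters above avoid both quoting bugs and SQL injection.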
Tag: sql-server
Why does SQL Server return numbers like 0.759999 while MySQL returns 0.76?
I have a SQL Server database table with three columns. One of them, named ‘paramvalue’, is defined as real; as an example, one of its values might be 0.76. When I use the pyodbc module command fetchall() I get back a number like 0.7599999904632568 instead of 0.76. I’m using Visual Studio 2017 and Python Tools
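The behaviour in the question is a property of the real type itself: real is a 32-bit IEEE 754 float, and 0.76 has no exact binary representation, so the nearest 32-bit value differs from 0.76 in the eighth decimal place. A quick way to see this from Python, with no database involved:

```python
# Round-trip 0.76 through a 32-bit float, as SQL Server's "real" type stores it.
import struct

as_real = struct.unpack("f", struct.pack("f", 0.76))[0]
print(as_real)            # 0.7599999904632568 -- the nearest float32 to 0.76
print(round(as_real, 2))  # 0.76 -- round for display, or declare the column DECIMAL
```

If exact decimals matter, the usual fix is to declare the column as decimal(p, s) rather than rounding on the client.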
How to automate running of Jupyter Notebook cells periodically
I want to integrate my Jupyter notebook with my website, where I have written the code to fetch real-time data from a MySQL server and do real-time visualisation using Plotly. But every time I have to run all the cells of my kernel. Is there a way I can automate running the Jupyter notebook cells periodically, say every day at 1
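One common approach (an assumption here, not the asker's setup) is to execute the notebook headlessly with jupyter nbconvert and let cron or Windows Task Scheduler trigger the script on a schedule:

```python
# Hypothetical runner script: executes the notebook in place.
# Schedule it externally, e.g. a crontab entry "0 1 * * *" for 1 AM daily.
import subprocess

subprocess.run(
    [
        "jupyter", "nbconvert",
        "--to", "notebook",
        "--execute",
        "--inplace",                 # overwrite the notebook with executed output
        "realtime_dashboard.ipynb",  # placeholder notebook name
    ],
    check=True,  # raise if any cell errors out
)
```

Tools like papermill offer the same headless execution with parameter injection, which helps if the notebook needs different inputs per run.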
Building a connection URL for mssql+pyodbc with sqlalchemy.engine.url.URL
The problem… I am trying to connect to an MSSQL server via SQLAlchemy. Here is my code with fake credentials (not my real credentials, obviously). The code… And this is the pyodbc error that I am getting. Additional details… But here is what is weird: if I make a pyodbc connection and use pandas.read_sql, then I can get data without
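For reference, the way SQLAlchemy's URL object is typically used for mssql+pyodbc looks like the sketch below (SQLAlchemy 1.4+ spells it URL.create; the host, database, and credentials are placeholders):

```python
# Build the connection URL programmatically instead of by string concatenation.
from sqlalchemy import create_engine
from sqlalchemy.engine import URL

connection_url = URL.create(
    "mssql+pyodbc",
    username="some_user",       # placeholder credentials
    password="some_password",
    host="some_server",
    database="some_db",
    query={"driver": "ODBC Driver 17 for SQL Server"},  # pyodbc needs a driver name
)
engine = create_engine(connection_url)
```

A frequent cause of a pyodbc error in this situation is a missing or misspelled driver entry in query: a raw pyodbc connection names the driver explicitly, while a SQLAlchemy URL must pass it along.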
Import mssql spatial fields into geopandas/shapely geometry
I cannot seem to directly import MSSQL spatial fields into geopandas. I can import normal MSSQL tables into pandas with pymssql without problems, but I cannot figure out a way to import the spatial fields into shapely geometry. I know that the OGR driver for MSSQL should be able to handle it, but I’m not skilled enough
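A workaround that often suffices, sketched here under the assumption of a hypothetical parcels table with a geometry column shape: have SQL Server serialize the geometry to WKT with .STAsText(), then parse it with shapely:

```python
# Pull spatial data as WKT text, then build shapely geometries client-side.
import geopandas as gpd
import pandas as pd
import pymssql
from shapely import wkt

conn = pymssql.connect(server="host", database="db")  # placeholder connection
df = pd.read_sql("SELECT id, shape.STAsText() AS wkt FROM parcels", conn)

gdf = gpd.GeoDataFrame(
    df.drop(columns="wkt"),
    geometry=df["wkt"].apply(wkt.loads),  # WKT string -> shapely geometry
    crs="EPSG:4326",  # assumed CRS; use whatever SRID the column actually stores
)
```

This sidesteps the OGR driver entirely, at the cost of shipping geometries as text.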
Write Large Pandas DataFrames to SQL Server database
I have 74 relatively large pandas DataFrames (about 34,600 rows and 8 columns each) that I am trying to insert into a SQL Server database as quickly as possible. After doing some research, I learned that the good old pandas.to_sql function is not suited to such large inserts into a SQL Server database, which was the initial approach that I took
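The usual remedy, assuming pyodbc and SQLAlchemy (placeholder server and table names below), is to turn on fast_executemany so to_sql sends batched parameter arrays instead of one round trip per row:

```python
# Bulk-friendly to_sql: fast_executemany batches inserts at the ODBC layer.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://server/db?driver=ODBC+Driver+17+for+SQL+Server",
    fast_executemany=True,  # supported by SQLAlchemy's pyodbc dialect
)

df = pd.DataFrame({"a": range(34_600)})  # stand-in for one of the 74 frames
df.to_sql("target_table", engine, if_exists="append", index=False, chunksize=10_000)
```

For truly huge loads, writing a CSV and using BULK INSERT or bcp on the server side is faster still.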
Python – Using pyodbc to connect to remote server using info from Excel data connection
I have an Excel workbook (albeit one that’s on our company server) that has a data connection to our SQL database so we can make nice pivot tables. I would like to get that data into Python (on my local computer) so I can do some faster analysis. I have installed pyodbc. Here is the “connection string” from the workbook: and
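Excel data connections usually store an OLEDB string (Provider=SQLOLEDB;Data Source=...;Initial Catalog=...), while pyodbc wants the ODBC keywords. A sketch of the translation, with placeholder names:

```python
# Map Excel's OLEDB fields onto pyodbc's ODBC keywords.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=company-sql-server;"   # Excel's "Data Source"
    "DATABASE=SalesDB;"            # Excel's "Initial Catalog"
    "Trusted_Connection=yes;"      # if the workbook used Integrated Security
)
rows = conn.cursor().execute("SELECT TOP 10 * FROM some_table").fetchall()
```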
How do I connect to SQL Server via sqlalchemy using Windows Authentication?
sqlalchemy, a database connection module for Python, uses SQL Authentication (database-defined user accounts) by default. If you want to use your Windows (domain or local) credentials to authenticate to SQL Server, the connection string must be changed. By default, as defined by sqlalchemy, the connection string to connect to SQL Server is as follows: This, if used
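A sketch of the Windows Authentication form, with placeholder server and database names: appending trusted_connection=yes tells the ODBC driver to use the current Windows credentials instead of a SQL login, so no username or password appears in the URL:

```python
from sqlalchemy import create_engine

# DSN-less URL; the driver and trusted_connection flags ride in the query string.
engine = create_engine(
    "mssql+pyodbc://server_name/database_name"
    "?driver=ODBC+Driver+17+for+SQL+Server"
    "&trusted_connection=yes"
)
```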
Retrieving Data from SQL Using pyodbc
I am trying to retrieve data from a SQL Server database using pyodbc and print it in a table with Python. However, I can only seem to retrieve the column names and data types, not the actual data values in each row. Basically I am trying to replicate an Excel sheet that retrieves
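The behaviour described usually means the code is printing cursor.description (the column metadata) rather than the fetched rows. A minimal sketch with placeholder connection details:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=server;DATABASE=db;"
    "Trusted_Connection=yes;"  # placeholder connection details
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM some_table")  # hypothetical table

columns = [col[0] for col in cursor.description]  # names/types live here
print(columns)
for row in cursor.fetchall():                     # the actual data values
    print(list(row))
```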