Question
Has anyone experienced this before?
I have a table with "int" and "varchar" columns - a report schedule table.
I am trying to import an Excel file with the ".xls" extension into this table using a Python program. I am using pandas to_sql to load 1 row of data.
Data imported is 1 row 11 columns.
Import works successfully but after the import I noticed that the datatypes in the original table have now been altered from:
int --> bigint
char(1) --> varchar(max)
varchar(30) --> varchar(max)
Any idea how I can prevent this? The switch in datatypes is causing issues in downstream routines.
import urllib
import pandas as pd
from sqlalchemy import create_engine

df = pd.read_excel(schedule_file, sheet_name='Schedule')
params = urllib.parse.quote_plus(r'DRIVER={SQL Server};SERVER=<<IP>>;DATABASE=<<DB>>;UID=<<UDI>>;PWD=<<PWD>>')
conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
engine = create_engine(conn_str)
table_name = 'REPORT_SCHEDULE'
df.to_sql(name=table_name, con=engine, if_exists='replace', index=False)
TIA
Answer
Consider using the dtype argument of pandas.DataFrame.to_sql
where you pass a dictionary of SQLAlchemy types to named columns:
import sqlalchemy
...
data.to_sql(name=table_name, con=engine, if_exists='replace', index=False,
            dtype={'name_of_datefld': sqlalchemy.DateTime(),
                   'name_of_intfld': sqlalchemy.types.INTEGER(),
                   'name_of_strfld': sqlalchemy.types.VARCHAR(length=30),
                   'name_of_floatfld': sqlalchemy.types.Float(precision=3, asdecimal=True),
                   'name_of_booleanfld': sqlalchemy.types.Boolean})
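Worth noting: if_exists='replace' drops and recreates the table, so without an explicit dtype mapping pandas infers the column types itself (int becomes bigint, strings become varchar(max) on SQL Server), which is exactly the schema drift described in the question. Below is a minimal, self-contained sketch of the same fix, using an in-memory SQLite engine as a stand-in for the SQL Server connection and hypothetical column names (report_id, active_flag, report_name) in place of the real report-schedule columns; inspecting the recreated table shows the declared types are the ones passed in dtype, not pandas' guesses:

```python
import pandas as pd
import sqlalchemy
from sqlalchemy import create_engine

# Hypothetical one-row frame standing in for the row read from the .xls file
df = pd.DataFrame({'report_id': [1], 'active_flag': ['Y'], 'report_name': ['daily']})

# In-memory SQLite engine as a stand-in for the mssql+pyodbc engine
engine = create_engine('sqlite://')

# Explicit SQLAlchemy types stop pandas from inferring its own column types
# when if_exists='replace' drops and recreates the table
df.to_sql(name='REPORT_SCHEDULE', con=engine, if_exists='replace', index=False,
          dtype={'report_id': sqlalchemy.types.INTEGER(),
                 'active_flag': sqlalchemy.types.CHAR(length=1),
                 'report_name': sqlalchemy.types.VARCHAR(length=30)})

# Reflect the recreated table to confirm the declared column types survived
cols = {c['name']: str(c['type'])
        for c in sqlalchemy.inspect(engine).get_columns('REPORT_SCHEDULE')}
print(cols)  # {'report_id': 'INTEGER', 'active_flag': 'CHAR(1)', 'report_name': 'VARCHAR(30)'}
```

The same dtype dictionary keyed on the real column names, passed to the existing df.to_sql call against the SQL Server engine, should keep int, char(1), and varchar(30) intact across imports.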