This post covers how to write a DataFrame to a MySQL table from pySpark.
Problem Description
I am attempting to insert records into a MySQL table. The table contains id and name as columns.
I am doing the following in a pyspark shell:
name = 'tester_1'
id = '103'
import pandas as pd
l = [id,name]
df = pd.DataFrame([l])
df.write.format('jdbc').options(
    url='jdbc:mysql://localhost/database_name',
    driver='com.mysql.jdbc.Driver',
    dbtable='DestinationTableName',
    user='your_user_name',
    password='your_password').mode('append').save()
I am getting the following attribute error:
AttributeError: 'DataFrame' object has no attribute 'write'
What am I doing wrong? What is the correct method to insert records into a MySQL table from pySpark?
Recommended Answer
The AttributeError is raised because df here is a pandas DataFrame, which has no write attribute; the JDBC writer belongs to the Spark DataFrame API, so the data has to be turned into a Spark DataFrame first. The final code could then be:
# Build a Spark DataFrame holding the row to insert
# (a list of tuples, so that toDF can infer the two columns).
data = [('103', 'tester_1')]
df = sc.parallelize(data).toDF(['id', 'name'])

# Append the rows to the MySQL table over JDBC.
df.write.format('jdbc').options(
    url='jdbc:mysql://localhost/database_name',
    driver='com.mysql.jdbc.Driver',
    dbtable='DestinationTableName',
    user='your_user_name',
    password='your_password').mode('append').save()
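For reference, on Spark 2.x and later the same write can also be expressed through the SparkSession API instead of sc.parallelize. The snippet below is a minimal sketch, assuming a SparkSession named spark is available (as it is by default in the pyspark shell) and reusing the same placeholder URL, table name and credentials as above; it is an illustration, not part of the original answer.
# Minimal sketch: assumes Spark 2.x+, an existing `spark` session,
# and the placeholder connection values used above.
df = spark.createDataFrame([('103', 'tester_1')], ['id', 'name'])

df.write.format('jdbc').options(
    url='jdbc:mysql://localhost/database_name',
    driver='com.mysql.jdbc.Driver',
    dbtable='DestinationTableName',
    user='your_user_name',
    password='your_password').mode('append').save()
In either version the MySQL JDBC driver must be on the classpath; one way is to start the shell with the connector jar, for example pyspark --jars /path/to/mysql-connector-java.jar (the jar path here is just a placeholder).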