Question
I have a dataframe with a column of type String, and I want to change the column type to Double in PySpark.
Here is the approach I took:
from pyspark.sql.functions import UserDefinedFunction
from pyspark.sql.types import DoubleType
toDoublefunc = UserDefinedFunction(lambda x: x, DoubleType())
changedTypedf = joindf.withColumn("label", toDoublefunc(joindf['show']))
Just wanted to know, is this the right way to do it? While running Logistic Regression I am getting some errors, so I wonder whether this is the reason for the trouble.
Answer
There is no need for a UDF here. Column already provides a cast method that accepts a DataType instance:
from pyspark.sql.types import DoubleType
changedTypedf = joindf.withColumn("label", joindf["show"].cast(DoubleType()))
or a short string:
changedTypedf = joindf.withColumn("label", joindf["show"].cast("double"))
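As a self-contained illustration, here is a minimal sketch; the SparkSession setup and the sample data standing in for joindf are assumptions for the demo, not part of the original question:

from pyspark.sql import SparkSession
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.getOrCreate()

# Stand-in for the question's joindf: "show" holds numbers as strings.
joindf = spark.createDataFrame([("1.5",), ("2.0",), ("bad",)], ["show"])

# Both cast forms are equivalent; strings that cannot be parsed become
# null instead of raising an error.
changedTypedf = joindf.withColumn("label", joindf["show"].cast(DoubleType()))

changedTypedf.printSchema()
# root
#  |-- show: string (nullable = true)
#  |-- label: double (nullable = true)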
where the canonical string names (other variations can be supported as well) correspond to the simpleString values. So for the atomic types:
from pyspark.sql import types

for t in ['BinaryType', 'BooleanType', 'ByteType', 'DateType',
          'DecimalType', 'DoubleType', 'FloatType', 'IntegerType',
          'LongType', 'ShortType', 'StringType', 'TimestampType']:
    print(f"{t}: {getattr(types, t)().simpleString()}")
BinaryType: binary
BooleanType: boolean
ByteType: tinyint
DateType: date
DecimalType: decimal(10,0)
DoubleType: double
FloatType: float
IntegerType: int
LongType: bigint
ShortType: smallint
StringType: string
TimestampType: timestamp
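The same holds for parameterized variants of these types; for example, a DecimalType with non-default precision and scale reports them in its simpleString:

types.DecimalType(38, 10).simpleString()
'decimal(38,10)'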
For example, for complex types:
types.ArrayType(types.IntegerType()).simpleString()
'array<int>'
types.MapType(types.StringType(), types.IntegerType()).simpleString()
'map<string,int>'
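These simpleString forms can be passed to cast as well. Below is a minimal sketch; the sample DataFrame is made up for illustration, and it assumes the source column is already an array whose elements can be cast:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# Hypothetical array<string> column with numeric contents.
df = spark.createDataFrame([(["1", "2", "3"],)], ["xs"])

# Cast element-wise using the simpleString form of the target type.
df.select(col("xs").cast("array<int>").alias("xs_int")).printSchema()
# root
#  |-- xs_int: array (nullable = true)
#  |    |-- element: integer (containsNull = true)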