This article covers how to create a new column from existing column values in PySpark; it may be a useful reference for anyone facing the same problem.
Problem Description
I have a data frame that has an existing column with airport names, and I want to create another column with their abbreviations.
For example, I have an existing column with the following values:
SEATTLE TACOMA AIRPORT, WA US
MIAMI INTERNATIONAL AIRPORT, FL US
SAN FRANCISCO INTERNATIONAL AIRPORT, CA US
MIAMI INTERNATIONAL AIRPORT, FL US
MIAMI INTERNATIONAL AIRPORT, FL US
SAN FRANCISCO INTERNATIONAL AIRPORT, CA US
SEATTLE TACOMA AIRPORT, WA US
I would like to create a new column with their associated abbreviations, e.g. SEA, MIA, and SFO. I was thinking I could use a for loop to achieve that, but I am not sure how to code it exactly.
Recommended Answer
Here are two example approaches:
- using a dictionary and a UDF
- joining with a second DataFrame
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
s = """\
SEATTLE TACOMA AIRPORT, WA US
MIAMI INTERNATIONAL AIRPORT, FL US
SAN FRANCISCO INTERNATIONAL AIRPORT, CA US
MIAMI INTERNATIONAL AIRPORT, FL US
MIAMI INTERNATIONAL AIRPORT, FL US
SAN FRANCISCO INTERNATIONAL AIRPORT, CA US
SEATTLE TACOMA AIRPORT, WA US"""
abbr = {
"SEATTLE TACOMA AIRPORT": "SEA",
"MIAMI INTERNATIONAL AIRPORT": "MIA",
"SAN FRANCISCO INTERNATIONAL AIRPORT": "SFO",
}
df = spark.read.csv(sc.parallelize(s.splitlines()))
print("=== df ===")
df.show()
# =================================
# 1. using a UDF
# =================================
print("=== using a UDF ===")
# look up each airport name in the dict; .get returns None for unknown names
# instead of failing the whole job with a KeyError
udf_airport_to_abbr = udf(lambda airport: abbr.get(airport), StringType())
df.withColumn("abbr", udf_airport_to_abbr("_c0")).show()
# =================================
# 2. using a join
# =================================
# you may want to create this df in some different way ;)
df_abbrs = spark.read.csv(sc.parallelize(["%s,%s" % x for x in abbr.items()]))
print("=== df_abbrs ===")
df_abbrs.show()
print("=== using a join ===")
# note: both frames have a column named _c1, so the result below contains two
# columns with that name; consider renaming one before the join
df.join(df_abbrs, on="_c0").show()
Output:
=== df ===
+--------------------+------+
| _c0| _c1|
+--------------------+------+
|SEATTLE TACOMA AI...| WA US|
|MIAMI INTERNATION...| FL US|
|SAN FRANCISCO INT...| CA US|
|MIAMI INTERNATION...| FL US|
|MIAMI INTERNATION...| FL US|
|SAN FRANCISCO INT...| CA US|
|SEATTLE TACOMA AI...| WA US|
+--------------------+------+
=== using a UDF ===
+--------------------+------+----+
| _c0| _c1|abbr|
+--------------------+------+----+
|SEATTLE TACOMA AI...| WA US| SEA|
|MIAMI INTERNATION...| FL US| MIA|
|SAN FRANCISCO INT...| CA US| SFO|
|MIAMI INTERNATION...| FL US| MIA|
|MIAMI INTERNATION...| FL US| MIA|
|SAN FRANCISCO INT...| CA US| SFO|
|SEATTLE TACOMA AI...| WA US| SEA|
+--------------------+------+----+
=== df_abbrs ===
+--------------------+---+
| _c0|_c1|
+--------------------+---+
|SEATTLE TACOMA AI...|SEA|
|MIAMI INTERNATION...|MIA|
|SAN FRANCISCO INT...|SFO|
+--------------------+---+
=== using a join ===
+--------------------+------+---+
| _c0| _c1|_c1|
+--------------------+------+---+
|SEATTLE TACOMA AI...| WA US|SEA|
|SEATTLE TACOMA AI...| WA US|SEA|
|SAN FRANCISCO INT...| CA US|SFO|
|SAN FRANCISCO INT...| CA US|SFO|
|MIAMI INTERNATION...| FL US|MIA|
|MIAMI INTERNATION...| FL US|MIA|
|MIAMI INTERNATION...| FL US|MIA|
+--------------------+------+---+
That concludes this article on creating a new column from existing column values in PySpark; hopefully the recommended answer above is helpful.