Problem Description
I am using pyspark and I have a DataFrame object df. This is what the output of df.printSchema() looks like:
root
|-- M_MRN: string (nullable = true)
|-- measurements: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- Observation_ID: string (nullable = true)
| | |-- Observation_Name: string (nullable = true)
| | |-- Observation_Result: string (nullable = true)
I would like to filter out all the elements in 'measurements' where the Observation_ID is not '5' or '10'. Currently, when I run df.select('measurements').take(2) I get:
[Row(measurements=[Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='108/72'),
Row(Observation_ID='11', Observation_Name='ABC', Observation_Result='70'),
Row(Observation_ID='10', Observation_Name='ABC', Observation_Result='73.029'),
Row(Observation_ID='14', Observation_Name='XYZ', Observation_Result='23.1')]),
Row(measurements=[Row(Observation_ID='2', Observation_Name='ZZZ', Observation_Result='3/4'),
Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='7')])]
After I do the above filtering, I would like df.select('measurements').take(2) to return:
[Row(measurements=[Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='108/72'),
Row(Observation_ID='10', Observation_Name='ABC', Observation_Result='73.029')]),
Row(measurements=[Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='7')])]
Is there a way to do this in pyspark? Thanks in advance for your help!
Recommended Answer
Since Spark 2.4, you can use the higher-order function FILTER to filter out elements from an array. So if you want to remove the elements where Observation_ID is not 5 or 10, you can do it as follows:
from pyspark.sql.functions import expr

# Keep only the array elements whose Observation_ID is '5' or '10'.
# Observation_ID is a string column, so string literals are used in the comparison.
df = df.withColumn('measurements', expr("FILTER(measurements, x -> x.Observation_ID = '5' OR x.Observation_ID = '10')"))
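As a side note (not part of the original answer), if you are on Spark 3.1 or later, the same filtering can be expressed with the Python-side pyspark.sql.functions.filter function instead of a SQL expression string. A minimal sketch, assuming the DataFrame is named df as above:

from pyspark.sql import functions as F

# Keep only the array elements whose Observation_ID is '5' or '10'.
df = df.withColumn(
    'measurements',
    F.filter('measurements', lambda x: x['Observation_ID'].isin('5', '10'))
)

Either version returns a new DataFrame, so after assigning the result, df.select('measurements').take(2) should contain only the Observation_ID '5' and '10' entries, matching the expected output above.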