Problem Description
With Spark and Java, I am trying to add an Integer identifier column to an existing Dataset[Row] with n columns.
I successfully added an id with zipWithUniqueId(), with zipWithIndex, and even with monotonically_increasing_id(). But none of them is satisfactory.
Example: I have a dataset with 195 rows. When I use one of these three methods, I get ids like 1584156487 or 12036. Moreover, those ids are not contiguous.
What I need is quite simple: an Integer id column whose values run from 1 to dataset.count(), one per row, where id = 1 is followed by id = 2, and so on.
How can I do that in Java/Spark?
Recommended Answer
You can try the row_number window function:
In Java:
import org.apache.spark.sql.functions;
import org.apache.spark.sql.expressions.Window;

// withColumn returns a new Dataset; assign the result to keep the id column.
Dataset<Row> withId = df.withColumn("id", functions.row_number().over(Window.orderBy("a column")));
Or in Scala:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

val withId = df.withColumn("id", row_number().over(Window.orderBy("a column")))
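To see why row_number gives the contiguous 1..n sequence the question asks for (unlike monotonically_increasing_id, whose values depend on partition layout), here is a minimal plain-Java sketch, independent of Spark, of what the window function does: sort the rows by the ordering column, then number them starting at 1. The row values are made up for illustration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class RowNumberDemo {
    public static void main(String[] args) {
        // Hypothetical values of the ordering column ("a column" above).
        List<String> rows = new ArrayList<>(Arrays.asList("banana", "apple", "cherry"));

        // Window.orderBy("a column") sorts by that column...
        Collections.sort(rows);

        // ...and row_number() assigns consecutive ids 1, 2, 3, ... in that order.
        for (int i = 0; i < rows.size(); i++) {
            System.out.println((i + 1) + "\t" + rows.get(i));
        }
    }
}
```

Note that a Window with orderBy but no partitionBy forces Spark to move all rows into a single partition to produce the global ordering, so this approach is fine for small datasets like the 195-row example but can be a bottleneck at scale.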