Problem Description
I have a dataframe:
+-----+--------+---------+
| usn|log_type|item_code|
+-----+--------+---------+
| 0| 11| I0938|
| 916| 19| I0009|
| 916| 51| I1097|
| 916| 19| C0723|
| 916| 19| I0010|
| 916| 19| I0010|
|12331| 19| C0117|
|12331| 19| C0117|
|12331| 19| I0009|
|12331| 19| I0009|
|12331| 19| I0010|
|12838| 19| I1067|
|12838| 19| I1067|
|12838| 19| C1083|
|12838| 11| B0250|
|12838| 19| C1346|
+-----+--------+---------+
And I want the distinct item_code values, with an index for each item_code, like this:
+---------+------+
|item_code| numId|
+---------+------+
| I0938| 0 |
| I0009| 1 |
| I1097| 2 |
| C0723| 3 |
| I0010| 4 |
| C0117| 5 |
| I1067| 6 |
| C1083| 7 |
| B0250| 8 |
| C1346| 9 |
+---------+------+
I don't want to use monotonically_increasing_id because it returns a bigint.
Recommended Answer
Using monotonically_increasing_id only guarantees that the numbers are increasing; the starting number and consecutive numbering are not guaranteed. If you want to be sure to get 0, 1, 2, 3, ... you can use the RDD function zipWithIndex().
Since I'm not too familiar with Spark together with Python, the example below uses Scala, but it should be easy to convert.
import org.apache.spark.sql.{Row, SparkSession}

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

val df = Seq("I0938","I0009","I1097","C0723","I0010","I0010",
             "C0117","C0117","I0009","I0009","I0010","I1067",
             "I1067","C1083","B0250","C1346")
  .toDF("item_code")

// Take the distinct codes and zip each one with a consecutive index (0, 1, 2, ...)
val df2 = df.distinct.rdd
  .map { case Row(item: String) => item }
  .zipWithIndex()
  .toDF("item_code", "numId")
Which will give you the requested result:
+---------+-----+
|item_code|numId|
+---------+-----+
| I0010| 0|
| I1067| 1|
| C0117| 2|
| I0009| 3|
| I1097| 4|
| C1083| 5|
| I0938| 6|
| C0723| 7|
| B0250| 8|
| C1346| 9|
+---------+-----+
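Since the question appears to come from PySpark, here is a rough Python conversion of the same idea. This is a minimal sketch, assuming an existing SparkSession; the inline sample data and the final cast of numId to int (to sidestep the bigint the question mentions) are my own additions, not part of the original answer.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("I0938",), ("I0009",), ("I1097",), ("C0723",), ("I0010",), ("I0010",),
     ("C0117",), ("C0117",), ("I0009",), ("I0009",), ("I0010",), ("I1067",),
     ("I1067",), ("C1083",), ("B0250",), ("C1346",)],
    ["item_code"],
)

# Take the distinct codes, drop down to the RDD, and zip each code
# with a consecutive index starting at 0.
indexed = (
    df.select("item_code").distinct().rdd
      .map(lambda row: row["item_code"])
      .zipWithIndex()
)

df2 = spark.createDataFrame(indexed, ["item_code", "numId"])

# zipWithIndex yields a Python int, which Spark stores as bigint;
# cast it down if a 32-bit integer column is required.
df2 = df2.withColumn("numId", df2["numId"].cast("int"))
df2.show()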