From the Spark documentation, the code below produces similar outputs:

```python
from pyspark.ml.feature import QuantileDiscretizer
from pyspark.ml.feature import Bucketizer

data = [(0, 18.0), (1, 19.0), (2, 8.0), (3, 5.0), (4, 2.2)]
df = spark.createDataFrame(data, ["id", "hour"])

result_discretizer = QuantileDiscretizer(numBuckets=3, inputCol="hour",
                                         outputCol="result").fit(df).transform(df)
result_discretizer.show()

splits = [-float("inf"), 3, 10, float("inf")]
result_bucketizer = Bucketizer(splits=splits, inputCol="hour",
                               outputCol="result").transform(df)
result_bucketizer.show()
```

Output:

```
+---+----+------+
| id|hour|result|
+---+----+------+
|  0|18.0|   2.0|
|  1|19.0|   2.0|
|  2| 8.0|   1.0|
|  3| 5.0|   1.0|
|  4| 2.2|   0.0|
+---+----+------+

+---+----+------+
| id|hour|result|
+---+----+------+
|  0|18.0|   2.0|
|  1|19.0|   2.0|
|  2| 8.0|   1.0|
|  3| 5.0|   1.0|
|  4| 2.2|   0.0|
+---+----+------+
```

Please let me know if there is any significant advantage of one over the other?

Answer:

QuantileDiscretizer determines the bucket splits based on the data. Bucketizer puts data into the buckets that you specify via splits.

So use Bucketizer when you already know the buckets you want, and QuantileDiscretizer when you want the splits estimated for you.

That the outputs are similar in this example is due to the contrived data and the splits chosen; in other scenarios the results can differ significantly.
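To make that relationship concrete, here is a minimal sketch; the skewed sample data and the `qd_result`/`bk_result` column names are illustrative, not from the question. In recent Spark versions the model returned by `QuantileDiscretizer.fit()` is itself a `Bucketizer`, so you can inspect the splits it learned with `getSplits()`, whereas a hand-built `Bucketizer` simply keeps whatever splits you pass in, regardless of how the data is distributed:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import QuantileDiscretizer, Bucketizer

spark = SparkSession.builder.getOrCreate()

# Skewed sample data: most values are small, a few are very large (illustrative).
values = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 50.0, 80.0, 100.0]
df = spark.createDataFrame([(i, v) for i, v in enumerate(values)], ["id", "hour"])

# QuantileDiscretizer learns its splits from the data; the fitted model is a Bucketizer.
qd_model = QuantileDiscretizer(numBuckets=3, inputCol="hour",
                               outputCol="qd_result").fit(df)
print(qd_model.getSplits())  # data-driven splits, roughly at the 1/3 and 2/3 quantiles

# Bucketizer keeps the fixed splits you pass in, regardless of the data distribution.
bk = Bucketizer(splits=[-float("inf"), 3.0, 10.0, float("inf")],
                inputCol="hour", outputCol="bk_result")

# On skewed data the two bucket assignments typically differ for at least some rows.
bk.transform(qd_model.transform(df)).show()
```

Comparing the `qd_result` and `bk_result` columns on the same skewed input shows where the choice matters: the quantile-based splits track the distribution of the data, while the fixed splits reflect only the boundaries you chose.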