Problem description
I am attempting to use long user/product IDs in the ALS model in PySpark MLlib (1.3.1) and have run into an issue. A simplified version of the code is given here:
from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS, Rating
sc = SparkContext("","test")
# Load and parse the data
d = [ "3661636574,1,1","3661636574,2,2","3661636574,3,3"]
data = sc.parallelize(d)
ratings = data.map(lambda l: l.split(',')).map(lambda l: Rating(long(l[0]), long(l[1]), float(l[2])))
# Build the recommendation model using Alternating Least Squares
rank = 10
numIterations = 20
model = ALS.train(ratings, rank, numIterations)
Running this code yields a java.lang.ClassCastException because the code attempts to convert the longs to integers. Looking through the source code, the ml ALS class in Spark allows long user/product IDs, but the mllib ALS class forces the use of ints.
Question: Is there a workaround to use long user/product IDs in PySpark ALS?
Recommended answer
This is a known issue (https://issues.apache.org/jira/browse/SPARK-2465), but it is unlikely to be fixed soon, because changing the interface to long user IDs would slow down the computation.
There are a couple of workarounds:
You can hash the userId to an int with the hash() function. A collision merely merges a few random rows together, so collisions shouldn't really affect the accuracy of your recommender; see the discussion in the first link. A sketch of this approach follows the list below.
You can generate unique int userIds with RDD.zipWithUniqueId(), or with the slower RDD.zipWithIndex(), just like in this thread: How to assign unique contiguous numbers to elements in a Spark RDD. A sketch of this approach also follows below.
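A minimal sketch of the hashing approach, building on the question's code. The helper name to_int_id and the masking to the non-negative 32-bit range are illustrative assumptions, not part of the original answer; here both user and product IDs are hashed for uniformity:
# Hash each long ID down to a non-negative 32-bit int, the range
# that mllib's ALS accepts. Collisions are possible but rare.
def to_int_id(raw_id):
    return hash(raw_id) & 0x7FFFFFFF  # illustrative mask into positive int range
ratings = data.map(lambda l: l.split(',')) \
              .map(lambda l: Rating(to_int_id(l[0]), to_int_id(l[1]), float(l[2])))
model = ALS.train(ratings, rank, numIterations)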
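And a minimal sketch of the zipWithUniqueId() approach, again with illustrative variable names. It assumes the set of distinct user IDs fits in driver memory (collectAsMap); for very large ID sets you would join the mapping RDD back instead of collecting it:
# Assign each distinct long user ID a unique int via zipWithUniqueId(),
# then translate the ratings through that mapping before training.
raw = data.map(lambda l: l.split(','))
# Pairs of (original long ID, generated unique ID), e.g. ('3661636574', 0)
user_ids = raw.map(lambda r: r[0]).distinct().zipWithUniqueId()
user_map = user_ids.collectAsMap()  # assumes the distinct IDs fit in memory
ratings = raw.map(lambda r: Rating(int(user_map[r[0]]), int(r[1]), float(r[2])))
model = ALS.train(ratings, rank, numIterations)
# Invert the mapping to translate recommendations back to the long IDs
inv_user_map = dict((v, k) for k, v in user_map.items())
Note that zipWithUniqueId() assigns unique but non-contiguous IDs without triggering an extra Spark job, whereas zipWithIndex() produces contiguous indices at the cost of an additional job, which is why the answer calls it slower.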