Problem description
There are two DataFrames. One contains price information as follows (day 1 to day 100 as 100 rows):
StoreId, ItemID, Date, Price
HH-101, item1, d_1, €9
HH-101, item1, d_2, €7
...
HH-101, item1, d_90, €4
...
HH-101, item1, d_100, €3
The second DataFrame holds sales information as shown (day 1 to day 100 as 100 columns, one row per store/item):
StoreId, ItemID, d_1, d_2, d_3, ..., d_90, ..., d_100
HH-101, item1, 2, 4, 0, ..., 12, ..., 22
HH-101, item2, 1, 0, 3, ..., 3, ..., 3
What is the optimal PySpark script to produce another DataFrame with a new column holding the sum of (number of units * sales price) for each item?
For example, for store HH-101 and item1:
2*9 + 4*7 + ... + 12*4 + 22*3
Is there a single step to do this, instead of writing out the sum of products over more than 100 columns?
Recommended answer
Here's a simpler example derived from your sample DataFrames. I think it should also scale to your real data.
df1.show()
+-------+------+----+-----+
|StoreId|ItemID|Date|Price|
+-------+------+----+-----+
| HH-101| item1| d_1| €9|
| HH-101| item1| d_2| €7|
+-------+------+----+-----+
df2.show()
+-------+------+---+---+
|StoreId|ItemID|d_1|d_2|
+-------+------+---+---+
| HH-101| item1| 2| 4|
| HH-101| item2| 1| 0|
+-------+------+---+---+
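For reference, a minimal sketch of how these two sample DataFrames could be built; the SparkSession setup and literal values are assumptions mirroring the show() output above:

from pyspark.sql import SparkSession

# Assumed setup: a plain local SparkSession; schemas mirror the printed output above.
spark = SparkSession.builder.getOrCreate()

df1 = spark.createDataFrame(
    [('HH-101', 'item1', 'd_1', '€9'),
     ('HH-101', 'item1', 'd_2', '€7')],
    ['StoreId', 'ItemID', 'Date', 'Price']
)

df2 = spark.createDataFrame(
    [('HH-101', 'item1', 2, 4),
     ('HH-101', 'item2', 1, 0)],
    ['StoreId', 'ItemID', 'd_1', 'd_2']
)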
You can unpivot df2 using stack, with the query string generated by a list comprehension over the day column names. Then join to df1 on the first three columns (StoreId, ItemID, Date), group by the store id and item id, and take the sum of Number * Price.
from pyspark.sql import functions as F

day_cols = df2.columns[2:]  # the day columns d_1, d_2, ...

result = df2.selectExpr(
    'StoreId', 'ItemID',
    # Generates e.g. "stack(2, 'd_1', d_1, 'd_2', d_2) as (Date, Number)",
    # turning each day column into its own (Date, Number) row.
    'stack(%d, ' % len(day_cols) + ', '.join(["'%s', %s" % (c, c) for c in day_cols]) + ') as (Date, Number)'
).join(
    df1, df1.columns[:3]  # join on StoreId, ItemID, Date
).groupBy(
    'StoreId', 'ItemID'
).agg(
    # substr(Price, 2) drops the leading € sign so the value can be cast to float
    F.expr('sum(Number * float(substr(Price, 2))) as Total')
)
result.show()
+-------+------+-----+
|StoreId|ItemID|Total|
+-------+------+-----+
| HH-101| item1| 46.0|
+-------+------+-----+
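As a side note, if you are on Spark 3.4 or later, the built-in DataFrame.unpivot method can replace the hand-built stack expression; here is a sketch under that assumption, reusing the same join and aggregation:

from pyspark.sql import functions as F

# Assumes Spark 3.4+, where DataFrame.unpivot is available.
result2 = df2.unpivot(
    ids=['StoreId', 'ItemID'],   # identifier columns to keep
    values=df2.columns[2:],      # day columns to melt into rows
    variableColumnName='Date',
    valueColumnName='Number'
).join(
    df1, ['StoreId', 'ItemID', 'Date']
).groupBy(
    'StoreId', 'ItemID'
).agg(
    F.expr('sum(Number * float(substr(Price, 2))) as Total')
)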