I have 34 million rows and only one column. I want to split the string into 4 columns.
Here is my sample dataset (df):
Log
0 Apr 4 20:30:33 100.51.100.254 dns,packet user: --- got query from 10.5.14.243:30648:
1 Apr 4 20:30:33 100.51.100.254 dns,packet user: id:78a4 rd:1 tc:0 aa:0 qr:0 ra:0 QUERY 'no error'
2 Apr 4 20:30:33 100.51.100.254 dns,packet user: question: tracking.intl.miui.com:A:IN
3 Apr 4 20:30:33 dns user: query from 9.5.10.243: #4746190 tracking.intl.miui.com. A
I want to split it into four columns using the following code:
df1 = df['Log'].str.split(n=3, expand=True)
df1.columns=['Month','Date','Time','Log']
df1.head()
Here is the result I expect:
Month Date Time Log
0 Apr 4 20:30:33 100.51.100.254 dns,packet user: --- go...
1 Apr 4 20:30:33 100.51.100.254 dns,packet user: id:78a...
2 Apr 4 20:30:33 100.51.100.254 dns,packet user: questi...
3 Apr 4 20:30:33 dns transjakarta: query from 9.5.10.243: #474...
4 Apr 4 20:30:33 100.51.100.254 dns,packet user: --- se...
But instead I get this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-36-c9b2023fbf3e> in <module>
----> 1 df1 = df['Log'].str.split(n=3, expand=True)
2 df1.columns=['Month','Date','Time','Log']
3 df1.head()
TypeError: split() got an unexpected keyword argument 'expand'
Is there a solution for splitting the strings with dask?
Best Answer
Edit: this now works
Dask dataframe does support the expand= keyword of the str.split method, provided you also pass an n= keyword telling it how many splits to expect.
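Here is a minimal sketch of that direct path (assuming a recent dask version; the dd.from_pandas call and the npartitions value are only there to make the example self-contained, since in practice ddf would likely come from something like dd.read_csv):

import dask.dataframe as dd

# build a dask dataframe from the pandas frame shown above
ddf = dd.from_pandas(df, npartitions=4)

# n=3 tells dask the split yields exactly four columns,
# so expand=True can construct the result lazily
df1 = ddf['Log'].str.split(n=3, expand=True)
df1.columns = ['Month', 'Date', 'Time', 'Log']

df1.head()  # triggers computation on the first partition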
Old answer
It looks like dask dataframe's str.split method does not implement the expand= keyword. You might raise an issue if one doesn't already exist.
As a short-term workaround, you can write a Pandas function and then scale it across your dask dataframe with the map_partitions method:
import pandas

def f(df: pandas.DataFrame) -> pandas.DataFrame:
    """ This is your code from above, as a function """
    df1 = df['Log'].str.split(n=3, expand=True)
    df1.columns = ['Month', 'Date', 'Time', 'Log']
    return df1  # return the split frame, not the original
ddf = ddf.map_partitions(f) # apply to all pandas dataframes within dask dataframe
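One caveat worth adding (not part of the original answer): map_partitions infers the output schema by calling f on a tiny empty frame, and since f changes the column layout it can be safer to spell out the expected schema yourself via the meta= argument, for example:

# optional: pass meta= so dask doesn't have to infer the new columns
ddf = ddf.map_partitions(
    f,
    meta={'Month': 'object', 'Date': 'object', 'Time': 'object', 'Log': 'object'},
)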
Because a Dask dataframe is just a collection of Pandas dataframes, it's relatively easy to build things yourself when Dask dataframe doesn't support them.
Regarding python - Str split with expand in Dask Dataframe, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/55789244/