I have seen many answers posted to questions on Stack Overflow involving the use of the Pandas method apply. I have also seen users commenting under them saying that "apply is slow, and should be avoided".

I have read many articles on the topic of performance that explain apply is slow. I have also seen a disclaimer in the docs about how apply is simply a convenience function for passing UDFs (can't seem to find that now). So, the general consensus is that apply should be avoided if possible. However, this raises the following questions:

1. If apply is so bad, then why is it in the API?
2. How and when should I make my code apply-free?
3. Are there ever any situations where apply is good (better than other possible solutions)?

## Solution

### apply, the Convenience Function you Never Needed

We start by addressing the questions in the OP, one by one.

DataFrame.apply and Series.apply are convenience functions defined on the DataFrame and Series objects respectively. apply accepts any user-defined function that applies a transformation/aggregation on a DataFrame. apply is effectively a silver bullet that does whatever any existing pandas function cannot do.

Some of the things apply can do:

- Run any user-defined function on a DataFrame or Series
- Apply a function either row-wise (axis=1) or column-wise (axis=0) on a DataFrame
- Perform index alignment while applying the function
- Perform aggregation with user-defined functions (however, we usually prefer agg or transform in these cases)
- Perform element-wise transformations
- Broadcast aggregated results to original rows (see the result_type argument)
- Accept positional/keyword arguments to pass to the user-defined functions

...among others. For more information, see Row or Column-wise Function Application in the documentation.
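As a quick illustration of a few of these features, here is a minimal sketch (the data and variable names here are arbitrary, chosen only for demonstration):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [10, 20, 30]})

# Column-wise (the default, axis=0): the function receives each column as a Series.
ranges = df.apply(lambda s: s.max() - s.min())

# Row-wise (axis=1): the function receives each row as a Series.
row_sums = df.apply(lambda row: row["A"] + row["B"], axis=1)

# Keyword arguments are forwarded to the user-defined function.
def scale(s, factor=1):
    return s * factor

scaled = df.apply(scale, factor=10)

# result_type="broadcast" shapes an aggregated result back onto the original rows.
broadcast = df.apply(lambda row: row.sum(), axis=1, result_type="broadcast")
```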
So, with all these features, why is apply bad? It is because apply is slow. Pandas makes no assumptions about the nature of your function, and so iteratively applies your function to each row/column as necessary. Additionally, handling all of the situations above means apply incurs some major overhead at each iteration. Further, apply consumes a lot more memory, which is a challenge for memory-bound applications.

There are very few situations where apply is appropriate to use (more on that below). If you're not sure whether you should be using apply, you probably shouldn't.

Let's address the next question: how and when should I make my code apply-free? To rephrase, here are some common situations where you will want to get rid of any calls to apply.

### Numeric Data

If you're working with numeric data, there is likely already a vectorized cython function that does exactly what you're trying to do (if not, please either ask a question on Stack Overflow or open a feature request on GitHub).

Contrast the performance of apply for a simple addition operation.

```python
df = pd.DataFrame({"A": [9, 4, 2, 1], "B": [12, 7, 5, 4]})
df

   A   B
0  9  12
1  4   7
2  2   5
3  1   4
```

```python
df.apply(np.sum)

A    16
B    28
dtype: int64

df.sum()

A    16
B    28
dtype: int64
```

Performance-wise, there's no comparison; the cythonized equivalent is much faster. There's no need for a graph, because the difference is obvious even for toy data.

```python
%timeit df.apply(np.sum)
%timeit df.sum()
2.22 ms ± 41.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
471 µs ± 8.16 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```

Even if you enable passing raw arrays with the raw argument, it's still twice as slow.

```python
%timeit df.apply(np.sum, raw=True)
840 µs ± 691 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```

Another example:

```python
df.apply(lambda x: x.max() - x.min())

A    8
B    8
dtype: int64

df.max() - df.min()

A    8
B    8
dtype: int64

%timeit df.apply(lambda x: x.max() - x.min())
%timeit df.max() - df.min()
2.43 ms ± 450 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
1.23 ms ± 14.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```

In general, seek out vectorized alternatives if possible.

### String/Regex

Pandas provides "vectorized" string functions in most situations, but there are rare cases where those functions do not... "apply", so to speak.

A common problem is to check whether a value in a column is present in another column of the same row.

```python
df = pd.DataFrame({
    'Name': ['mickey', 'donald', 'minnie'],
    'Title': ['wonderland', "welcome to donald's castle", 'Minnie mouse clubhouse'],
    'Value': [20, 10, 86]})
df

     Name                        Title  Value
0  mickey                   wonderland     20
1  donald  welcome to donald's castle     10
2  minnie       Minnie mouse clubhouse     86
```

This should return the second and third rows, since "donald" and "minnie" are present in their respective "Title" columns.

Using apply, this would be done with

```python
df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)

0    False
1     True
2     True
dtype: bool

df[df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)]

     Name                        Title  Value
1  donald  welcome to donald's castle     10
2  minnie       Minnie mouse clubhouse     86
```

However, a better solution exists using list comprehensions.

```python
df[[y.lower() in x.lower() for x, y in zip(df['Title'], df['Name'])]]

     Name                        Title  Value
1  donald  welcome to donald's castle     10
2  minnie       Minnie mouse clubhouse     86
```

```python
%timeit df[df.apply(lambda x: x['Name'].lower() in x['Title'].lower(), axis=1)]
%timeit df[[y.lower() in x.lower() for x, y in zip(df['Title'], df['Name'])]]
2.85 ms ± 38.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
788 µs ± 16.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```

The thing to note here is that iterative routines happen to be faster than apply because of the lower overhead. If you need to handle NaNs and invalid dtypes, you can build on this using a custom function that you can then call with arguments inside the list comprehension, as in the sketch below.
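One possible shape for such a helper (a minimal sketch; the function name and the policy of treating missing values as a non-match are my own assumptions):

```python
import pandas as pd

df = pd.DataFrame({
    'Name': ['mickey', 'donald', None],
    'Title': ['wonderland', "welcome to donald's castle", 'Minnie mouse clubhouse'],
    'Value': [20, 10, 86]})

def name_in_title(title, name):
    # Hypothetical helper: treat missing or non-string values as "no match"
    # instead of raising an AttributeError on .lower().
    if not isinstance(title, str) or not isinstance(name, str):
        return False
    return name.lower() in title.lower()

df[[name_in_title(x, y) for x, y in zip(df['Title'], df['Name'])]]
```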
For more information on when list comprehensions should be considered a good option, see my writeup: Are for-loops in pandas really bad? When should I care?.

### A Common Pitfall: Exploding Columns of Lists

```python
s = pd.Series([[1, 2]] * 3)
s

0    [1, 2]
1    [1, 2]
2    [1, 2]
dtype: object
```

People are tempted to use apply(pd.Series). This is horrible in terms of performance.

```python
s.apply(pd.Series)

   0  1
0  1  2
1  1  2
2  1  2
```

A better option is to listify the column and pass it to pd.DataFrame.

```python
pd.DataFrame(s.tolist())

   0  1
0  1  2
1  1  2
2  1  2
```

```python
%timeit s.apply(pd.Series)
%timeit pd.DataFrame(s.tolist())
2.65 ms ± 294 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
816 µs ± 40.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```

Lastly, are there any situations where apply is good? apply is a convenience function, so there are situations where the overhead is negligible enough to forgive. It really depends on how many times the function is called.

### Functions that are Vectorized for Series, but not DataFrames

What if you want to apply a string operation on multiple columns? What if you want to convert multiple columns to datetime? These functions are vectorized for Series only, so they must be applied over each column that you want to convert/operate on.

```python
df = pd.DataFrame(
    pd.date_range('2018-12-31', '2019-01-31', freq='2D').date.astype(str).reshape(-1, 2),
    columns=['date1', 'date2'])
df

        date1       date2
0  2018-12-31  2019-01-02
1  2019-01-04  2019-01-06
2  2019-01-08  2019-01-10
3  2019-01-12  2019-01-14
4  2019-01-16  2019-01-18
5  2019-01-20  2019-01-22
6  2019-01-24  2019-01-26
7  2019-01-28  2019-01-30

df.dtypes

date1    object
date2    object
dtype: object
```

This is an admissible case for apply:

```python
df.apply(pd.to_datetime, errors='coerce').dtypes

date1    datetime64[ns]
date2    datetime64[ns]
dtype: object
```

Note that it would also make sense to stack, or just use an explicit loop. All these options are slightly faster than using apply, but the difference is small enough to forgive.

```python
%timeit df.apply(pd.to_datetime, errors='coerce')
%timeit pd.to_datetime(df.stack(), errors='coerce').unstack()
%timeit pd.concat([pd.to_datetime(df[c], errors='coerce') for c in df], axis=1)
%timeit for c in df.columns: df[c] = pd.to_datetime(df[c], errors='coerce')

5.49 ms ± 247 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.94 ms ± 48.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.16 ms ± 216 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.41 ms ± 1.71 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

You can make a similar case for other operations such as string operations, or conversion to category.

```python
u = df.apply(lambda x: x.str.contains(...))
v = df.apply(lambda x: x.astype('category'))
```

versus

```python
u = pd.concat([df[c].str.contains(...) for c in df], axis=1)

v = df.copy()
for c in df:
    v[c] = df[c].astype('category')
```

And so on...

### Converting Series to str: astype versus apply

This seems like an idiosyncrasy of the API. Using apply to convert integers in a Series to string is comparable to (and sometimes faster than) using astype.

The graph was plotted using the perfplot library.

```python
import perfplot

perfplot.show(
    setup=lambda n: pd.Series(np.random.randint(0, n, n)),
    kernels=[
        lambda s: s.astype(str),
        lambda s: s.apply(str)
    ],
    labels=['astype', 'apply'],
    n_range=[2**k for k in range(1, 20)],
    xlabel='N',
    logx=True,
    logy=True,
    equality_check=lambda x, y: (x == y).all())
```

With floats, I see that astype is consistently as fast as, or slightly faster than, apply. So this has to do with the fact that the data in the test is of integer type.

### GroupBy operations with chained transformations

GroupBy.apply has not been discussed until now, but GroupBy.apply is also an iterative convenience function to handle anything that the existing GroupBy functions do not.

One common requirement is to perform a GroupBy and then two chained operations such as a "lagged cumsum":

```python
df = pd.DataFrame({"A": list('aabcccddee'),
                   "B": [12, 7, 5, 4, 5, 4, 3, 2, 1, 10]})
df

   A   B
0  a  12
1  a   7
2  b   5
3  c   4
4  c   5
5  c   4
6  d   3
7  d   2
8  e   1
9  e  10
```

You'd need two successive groupby calls here:

```python
df.groupby('A').B.cumsum().groupby(df.A).shift()

0     NaN
1    12.0
2     NaN
3     NaN
4     4.0
5     9.0
6     NaN
7     3.0
8     NaN
9     1.0
Name: B, dtype: float64
```

Using apply, you can shorten this to a single call.

```python
df.groupby('A').B.apply(lambda x: x.cumsum().shift())

0     NaN
1    12.0
2     NaN
3     NaN
4     4.0
5     9.0
6     NaN
7     3.0
8     NaN
9     1.0
Name: B, dtype: float64
```

It is very hard to quantify the performance because it depends on the data. But in general, apply is an acceptable solution if the goal is to reduce a groupby call (because groupby is also quite expensive).
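If you want a rough sense of the trade-off on your own data, a sketch along these lines can be used (the frame below is synthetic and the group count is an arbitrary assumption; real results depend heavily on group sizes):

```python
import timeit

import numpy as np
import pandas as pd

# Synthetic data: 100k rows spread over ~1000 groups (arbitrary choice).
df = pd.DataFrame({"A": np.random.randint(0, 1000, 100_000),
                   "B": np.random.rand(100_000)})

def two_groupbys():
    return df.groupby('A').B.cumsum().groupby(df.A).shift()

def one_apply():
    return df.groupby('A').B.apply(lambda x: x.cumsum().shift())

print("two groupby calls:", timeit.timeit(two_groupbys, number=10))
print("one apply:        ", timeit.timeit(one_apply, number=10))
```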
### Other Caveats

Aside from the caveats mentioned above, it is also worth mentioning that apply operates on the first row (or column) twice. This is done to determine whether the function has any side effects. If not, apply may be able to use a fast path for evaluating the result; otherwise, it falls back to a slow implementation.

```python
df = pd.DataFrame({
    'A': [1, 2],
    'B': ['x', 'y']})

def func(x):
    print(x['A'])
    return x

df.apply(func, axis=1)
# 1
# 1
# 2

   A  B
0  1  x
1  2  y
```

This behaviour is also seen in GroupBy.apply on pandas versions <0.25 (it was fixed for 0.25; see here for more information).
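A corollary: if your function genuinely has side effects (printing, logging, accumulating state), a plain loop sidesteps the double evaluation entirely. A minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': ['x', 'y']})

seen = []
for row in df.itertuples(index=False):
    seen.append(row.A)  # each row is visited exactly once

print(seen)  # [1, 2]
```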