This article walks through an example of a monad whose Applicative interface can be implemented more efficiently than its Monad interface. It should be a useful reference for anyone looking into the same question.
Problem description
In one discussion I heard that the Applicative interface of some parsers is implemented differently, and more efficiently, than their Monad interface. The reason is that with Applicative we know all "effects" in advance, before the whole effectful computation is run. With monads, effects can depend on values produced during the computation, so this optimization is not possible.
I'd like to see some good examples of this. It can be some very simple parser or some different monad, that's not important. The important thing is that the Applicative interface of such a monad complies with its return and ap, but using the Applicative produces more efficient code.
Update: Just to clarify, here I'm not interested in applicatives that can't be monads. The question is about things that are both.
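To make the "effects known in advance" idea concrete, here is a toy sketch (not from the original question; the type `Labeled` and the helper `effect` are made up for illustration): an applicative that tags computations with effect labels, which can be inspected without running anything. A Monad instance for the same type would have to produce the first value before the labels contributed by the second computation were even known.

```haskell
-- A hypothetical effect-tracking applicative: a value plus the list of
-- effect labels the computation performs, in order.
data Labeled a = Labeled [String] a

instance Functor Labeled where
    fmap f (Labeled ls a) = Labeled ls (f a)

instance Applicative Labeled where
    pure = Labeled []
    -- Composition concatenates the labels: both sides are known statically.
    Labeled ls f <*> Labeled ms a = Labeled (ls ++ ms) (f a)

-- A primitive "effect" that just records its label.
effect :: String -> a -> Labeled a
effect l = Labeled [l]

-- The labels of an applicative composition can be read off without
-- running the computation.
labels :: Labeled a -> [String]
labels (Labeled ls _) = ls

main :: IO ()
main = print (labels ((,) <$> effect "read" 1 <*> effect "write" 2))
-- prints ["read","write"]
```

With `>>=`, the second computation is a function of the first result, so its labels only become available at run time; this is exactly the static information the Applicative interface preserves.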
Solution
Another example is a strict left fold. You can write an applicative instance which allows you to compose folds so that the resulting fold can be performed on the data in a single pass and constant space. However, the monad instance needs to re-iterate from the beginning of the data for each bind and keep the whole list in memory.
{-# LANGUAGE GADTs #-}
import Criterion.Main
import Data.Monoid
import Control.Applicative
import Control.Monad
import Prelude hiding (sum)
data Fold e r where
    Step :: !(a -> e -> a) -> !a -> !(a -> r) -> Fold e r
    Bind :: !(Fold e r) -> !(r -> Fold e s) -> Fold e s
data P a b = P !a !b
instance Functor (Fold e) where
    fmap f (Step step acc ret) = Step step acc (f . ret)
    fmap f (Bind fld g) = Bind fld (fmap f . g)
instance Applicative (Fold e) where
    pure a = Step const a id
    Step fstep facc fret <*> Step xstep xacc xret = Step step acc ret where
        step (P fa xa) e = P (fstep fa e) (xstep xa e)
        acc = P facc xacc
        ret (P fa xa) = (fret fa) (xret xa)
    Bind fld g <*> fldx = Bind fld ((<*> fldx) . g)
    fldf <*> Bind fld g = Bind fld ((fldf <*>) . g)
instance Monad (Fold e) where
    return = pure
    (>>=) = Bind
fold :: Fold e r -> [e] -> r
fold (Step _ acc ret) [] = ret acc
fold (Step step acc ret) (x:xs) = fold (Step step (step acc x) ret) xs
fold (Bind fld g) lst = fold (g $ fold fld lst) lst
monoidalFold :: Monoid m => (e -> m) -> (m -> r) -> Fold e r
monoidalFold f g = Step (\a -> mappend a . f) mempty g
count :: Num n => Fold e n
count = monoidalFold (const (Sum 1)) getSum
sum :: Num n => Fold n n
sum = monoidalFold Sum getSum
avgA :: Fold Double Double
avgA = liftA2 (/) sum count
avgM :: Fold Double Double
avgM = liftM2 (/) sum count
main :: IO ()
main = defaultMain
    [ bench "Monadic" $ nf (test avgM) 1000000
    , bench "Applicative" $ nf (test avgA) 1000000
    ] where test f n = fold f [1..n]
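As a quick sanity check (not part of the original answer), the applicative average can also be run directly. The following condensed, self-contained variant of the definitions above drops `Bind` and the `Monad` instance, keeping just enough to evaluate `avgA` in a single pass:

```haskell
{-# LANGUAGE GADTs #-}
import Prelude hiding (sum)
import Data.Monoid (Sum(..))
import Control.Applicative (liftA2)

-- Applicative-only fragment of the Fold type above: no Bind constructor.
data Fold e r where
    Step :: !(a -> e -> a) -> !a -> !(a -> r) -> Fold e r

data P a b = P !a !b

instance Functor (Fold e) where
    fmap f (Step step acc ret) = Step step acc (f . ret)

instance Applicative (Fold e) where
    pure a = Step const a id
    -- Paired accumulators: both folds advance on each element, one pass.
    Step fstep facc fret <*> Step xstep xacc xret = Step step acc ret where
        step (P fa xa) e = P (fstep fa e) (xstep xa e)
        acc = P facc xacc
        ret (P fa xa) = fret fa (xret xa)

fold :: Fold e r -> [e] -> r
fold (Step _ acc ret) [] = ret acc
fold (Step step acc ret) (x:xs) = fold (Step step (step acc x) ret) xs

monoidalFold :: Monoid m => (e -> m) -> (m -> r) -> Fold e r
monoidalFold f g = Step (\a -> mappend a . f) mempty g

sum, count :: Fold Double Double
sum   = monoidalFold Sum getSum
count = monoidalFold (const (Sum 1)) getSum

avgA :: Fold Double Double
avgA = liftA2 (/) sum count

main :: IO ()
main = print (fold avgA [1..10])  -- prints 5.5, in one traversal
```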
I wrote the above off the top of my head as an example, so it might not be the optimal implementation of applicative and monadic folds, but running it gives me:
benchmarking Monadic
mean: 119.3114 ms, lb 118.8383 ms, ub 120.2822 ms, ci 0.950
std dev: 3.339376 ms, lb 2.012613 ms, ub 6.215090 ms, ci 0.950
benchmarking Applicative
mean: 51.95634 ms, lb 51.81261 ms, ub 52.15113 ms, ci 0.950
std dev: 850.1623 us, lb 667.6838 us, ub 1.127035 ms, ci 0.950
That concludes this example of a monad whose Applicative interface can be optimized beyond its Monad interface. I hope the answer above proves helpful.