Question
I'm trying to wrap my head around transactions in event sourcing.
I have one aggregate (transaction scope) in my event store.
A command gets processed and produces 10 events. Now, can this be handled as 1 transaction, or is it 10 transactions? By a transaction I mean changes to the state that are only valid together as a whole. Have I designed my events wrongly if they are split up into many events like this, even though I want them handled as a whole?
I tend to think that it is the command that defines the transaction, the intent, and that all events produced by that command should be handled together as a whole. That is, they should only be persisted as a whole, loaded as a whole, made visible to readers as a whole (atomically), and sent to listeners on my event bus only as a whole.
Is this the right way to think about it?
How is this handled in for instance Kafka and Event Store?
And what about commands that produce many events; is that really good design? I want something to happen (command) and something to have happened (event), not many things to have happened. I'd like to have this 1:1 relationship, but I read here and there that commands should be able to produce many events. Why?
Sorry for the rambling; I hope somebody gets what I'm trying to ask here.
Answer
As a write, this is normally modeled as a single transaction; either the entire commit is added to the history, or nothing is.
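To make the all-or-nothing write concrete, here is a minimal sketch of an atomic commit against a toy in-memory event store. The class, method names, and event shapes are invented for illustration, not a real store's API; the point is that the batch of events from one command either appends as a unit or fails as a unit (here via an optimistic-concurrency check on the stream version).

```python
class ConcurrencyError(Exception):
    """Raised when the stream has moved past the expected version."""
    pass

class EventStore:
    """Toy in-memory event store; one list of events per stream."""

    def __init__(self):
        self.streams = {}  # stream id -> list of event dicts

    def append(self, stream_id, expected_version, events):
        """Append all events as one commit, or none at all."""
        stream = self.streams.setdefault(stream_id, [])
        if len(stream) != expected_version:
            # Nothing is written; the whole commit is rejected.
            raise ConcurrencyError(
                f"expected version {expected_version}, stream is at {len(stream)}")
        # All events produced by the command become visible together.
        stream.extend(events)
        return len(stream)

store = EventStore()
store.append("account-42", 0, [
    {"type": "AccountOpened"},
    {"type": "Deposited", "amount": 100},
])
```

A second writer that still believes the stream is at version 0 would get a `ConcurrencyError` and write nothing, which is exactly the all-or-nothing behavior described above.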
The read side of things can be a bit trickier. After all, events are just events; as a consumer, I may not even be interested in all of them, and there may be business value in consuming them as quickly as possible rather than waiting for everything to show up in order.
For consumers where the ordering is significant, in those cases you'll be reading the stream, rather than events. But it's still the case that you may have batching/paging concerns in the consumer that conflict with the goal of aligning all work on a commit boundary.
A thing to keep in mind is that, from the point of view of the readers, there are no invariants to maintain. The event stream is just a sequence of things that happened.
The only really critical case is that of a writer, trying to load the aggregate state; in which case you need the entire stream, and the commit boundaries are basically irrelevant.
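A sketch of that writer-side load, assuming invented event shapes: the aggregate state is a fold over every event in the stream, and where the commit boundaries fell makes no difference to the result.

```python
from functools import reduce

def apply(state, event):
    """Fold one event into the aggregate state (event shapes are hypothetical)."""
    if event["type"] == "Deposited":
        return {**state, "balance": state["balance"] + event["amount"]}
    if event["type"] == "Withdrew":
        return {**state, "balance": state["balance"] - event["amount"]}
    return state  # unknown event types leave the state unchanged

events = [
    {"type": "Deposited", "amount": 100},  # commit 1
    {"type": "Withdrew", "amount": 30},    # commit 2; the boundary is irrelevant here
    {"type": "Deposited", "amount": 5},    # commit 3
]

# Replaying the entire stream rebuilds the state: 100 - 30 + 5 = 75.
state = reduce(apply, events, {"balance": 0})
```

Only the order of events matters to the fold, which is why the writer needs the whole stream but not the commit metadata.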
In Greg Young's Event Store, writing to a stream means appending an ordered collection of events at the specified position in the stream. The whole block comes in, or not at all.
Reading from a stream includes paging support -- the client is allowed to ask for a run of events that fall between commit boundaries. The documentation offers no guarantees of what happens in that case. As it happens, the representation returned can support returning fewer events than are available, so it could be that the events are always returned on commit boundaries.
My understanding from reading the source code is that the persistence structure used to store the streams on disk does not attempt to preserve commit boundaries - but I could certainly be mistaken on this point.
There are two reasons.
First, aggregates are artificial; they are the consistency boundaries that we use to ensure the integrity of the data in the journal. But a given aggregate can compose many entities (for instance, at low contention levels there's nothing inherently wrong about putting your entire domain model into a single "aggregate"), and it is often useful to treat changes to different entities as distinct from one another. (The alternative is analogous to writing a SomethingChanged event every time, and insisting that all clients consume the event to find out what happened).
Second, re-establishing the domain invariant is frequently a separate action from the action specified in the command. Bob just withdrew more cash from his account than was available to him; we need to update his account ledger and escalate the problem to a human being. Oh, here's a new command describing a deposit Bob made earlier in the day, we need to update his account ledger and tell the human being to stand down.
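The Bob example can be sketched as a command handler that emits more than one event, each naming a distinct consequence. The handler and event names below are hypothetical, chosen only to mirror the scenario above.

```python
def handle_withdraw(state, command):
    """One command, possibly several events: the ledger update is always
    recorded, and an overdraft is a separate consequence with its own event."""
    events = [{"type": "AccountDebited", "amount": command["amount"]}]
    if command["amount"] > state["balance"]:
        events.append({
            "type": "OverdraftDetected",
            "shortfall": command["amount"] - state["balance"],
        })
    return events

# Bob withdraws 80 from an account holding 50: the ledger is debited
# and the overdraft (shortfall 30) is escalated as its own event.
events = handle_withdraw({"balance": 50}, {"amount": 80})
```

Consumers that only care about the ledger can ignore `OverdraftDetected`, which is the payoff of naming each consequence instead of emitting one opaque `SomethingChanged`.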
But broadly, because distinguishing the multiple consequences of a command better aligns with the natural language of the business.