I have a use case where my KTable looks like this.

KTable:orderTable

Key : Value

{123} : {id1,12}

{124} : {id2,10}

{125} : {id1,5}

{126} : {id2,11}

KTable: orderByIdTable => this table should be keyed by a groupBy on the value field (id), with the count values summed per key: id1 = (12+5), id2 = (10+11)

Key : Value

{id1} : {17}

{id2} : {21}

         final KTable<String, Order> orderTable = builder.table("order-topic");
         // Don't know how to do this further...
         final KTable<String, Long> orderByIdTable = ?

Best Answer

Here is a code example (using only Java primitive types, which made it quicker to put together) that shows how to re-key (a.k.a. re-partition) a KTable, producing a new KTable. You should be able to easily adapt it to your case of turning a KTable<String, Order> into a KTable<String, Long>.

Personally, I would go with Variant 2 for your use case.

Example below. It is not fully tested; in particular, tombstone records (messages with a non-null key but a null value, which indicate that the key should be removed from the table) may not be handled correctly.

final StreamsBuilder builder = new StreamsBuilder();
final KTable<Integer, String> table = builder.table(inputTopic, Consumed.with(Serdes.Integer(), Serdes.String()));

// Variant 1 (https://docs.confluent.io/current/streams/faq.html#option-1-write-kstream-to-ak-read-back-as-ktable)
// Here, we re-key the KTable, write the results to a new topic, and then re-read that topic into a new KTable.
table
    .toStream()
    .map((key, value) -> KeyValue.pair(value, key))
    .to(outputTopic1, Produced.with(Serdes.String(), Serdes.Integer()));
KTable<String, Integer> rekeyedTable1 =
    builder.table(outputTopic1, Consumed.with(Serdes.String(), Serdes.Integer()));

// Variant 2 (https://docs.confluent.io/current/streams/faq.html#option-2-perform-a-dummy-aggregation)
// Here, we re-key the KTable (resulting in a KGroupedTable), and then perform a dummy aggregation to turn the
// KGroupedTable into a KTable.
final KTable<String, Integer> rekeyedTable2 =
    table
        .groupBy(
            (key, value) -> KeyValue.pair(value, key),
            Grouped.with(Serdes.String(), Serdes.Integer())
        )
        // Dummy aggregation
        .reduce(
            (aggValue, newValue) -> newValue, /* adder */
            (aggValue, oldValue) -> oldValue  /* subtractor */
        );
rekeyedTable2.toStream().to(outputTopic2, Produced.with(Serdes.String(), Serdes.Integer()));
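
To map this back to the original question (KTable<String, Order> to KTable<String, Long>), a minimal sketch along the lines of Variant 2 could look as follows. It assumes a hypothetical Order class with getId() and getCount() accessors and a Serde<Order> instance named orderSerde; those names are illustrative and not part of the answer above.

// Minimal sketch, adapting Variant 2 to the Order use case from the question.
// Assumes Order exposes getId() and getCount(), and that a Serde<Order> named
// orderSerde is available for the source topic (both are assumptions here).
final KTable<String, Order> orderTable =
    builder.table("order-topic", Consumed.with(Serdes.String(), orderSerde));

final KTable<String, Long> orderByIdTable =
    orderTable
        // Re-key each order by its id field; the new value is the order's count.
        .groupBy(
            (orderKey, order) -> KeyValue.pair(order.getId(), (long) order.getCount()),
            Grouped.with(Serdes.String(), Serdes.Long())
        )
        // Real aggregation this time: sum the counts per id. The subtractor keeps
        // the sum correct when an existing order is updated or deleted.
        .reduce(
            Long::sum,                                    /* adder */
            (aggValue, oldValue) -> aggValue - oldValue   /* subtractor */
        );

With the input data above, this would produce id1 -> 17 and id2 -> 21.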
