I have a question about Camel's transaction management.

In my example, I extract XML files from a ZIP, create JPA entities from the XML via JAXB, and then write them to the database.
Then I force a RuntimeException.

My expectation was that the inserted entities would be rolled back, but they have already been committed.

I made the ZIP splitter transactional so that either all contained files are processed, or none are.
The aggregator is responsible for combining the metadata from the different files before it is written to the database.

Can someone explain what is missing in the code, or where my misunderstanding of Camel's transaction management lies?

Thanks in advance

Adrian

...

  private final class AggregationStrategyImplementation implements AggregationStrategy {
        DocumentMetaDataContainer documentMetaDataContainer;

        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            if (oldExchange == null) {
                documentMetaDataContainer = (DocumentMetaDataContainer) newExchange.getIn()
                        .getBody();
                oldExchange = newExchange;
            } else {
                String header = String.valueOf(newExchange.getIn().getHeader("CamelFileName"));
                System.out.println("aggregating " + header);
                documentMetaDataContainer.putFileStreamToSpecificElement(header, newExchange
                        .getIn().getBody(byte[].class));

                if (isDone()) {
                    oldExchange.getOut().setHeader(Exchange.AGGREGATION_COMPLETE_ALL_GROUPS, true);
                }
            }
            return oldExchange;
        }

        public boolean isDone() {
            if (documentMetaDataContainer == null)
                return false;
            for (DocumentMetaData documentMetaData : documentMetaDataContainer
                    .getListOfDocumentMetaData()) {
                if (documentMetaData.getDocumentFile() == null)
                    return false;
            }
            return true;
        }
    }


    public void processImport() throws Exception {
        ApplicationContext springContext = new ClassPathXmlApplicationContext(
                "classpath:spring.xml");
      final CamelContext camelContext = SpringCamelContext.springCamelContext(springContext);

      RouteBuilder routeBuilder = new RouteBuilder() {
          private static final String inbox = "/Development/ppaFiles";

          JAXBContext jaxbContext = JAXBContext.newInstance(new Class[] {
                  com.business.services.DocumentMetaDataContainer.class
                  });
          DataFormat jaxbDataFormat = new JaxbDataFormat(jaxbContext);

          public void configure() {
              from("file:"+inbox+"?consumer.delay=1000&noop=true")
                  .routeId("scanDirectory")
                  .choice()
                      .when(header("CamelFileName").regex("badb_(.)*.zip"))
                          .setHeader("msgId").simple("${header.CamelFileName}_${date:now:S}")
                          .log("processing zip file, aggregating by ${header.msgId}")
                          .to("direct:splitZip")
                  .end();


              from("direct:splitZip")
                  .routeId("splitZip")
                  .transacted()
                  .split(new ZipSplitter())
                  .streaming()
                  .choice()
                      .when(header("CamelFileName").regex("(.)*_files.xml")) // Meta File
                          .to("direct:toDocumentMetaData")
                      .otherwise() // PDF XML Files
                          .to("direct:toByteArray")
                          .end();

              from("direct:toByteArray")
                  .routeId("toByteArray")
                  .convertBodyTo(byte[].class)
                  .to("direct:aggregateZipEntries");


              from("direct:toDocumentMetaData")
                  .routeId("toDocumentMetaData")
                  .split()
                      // root tag name in xml file
                      .tokenizeXML("files")
                  .unmarshal(jaxbDataFormat)
                  .to("direct:aggregateZipEntries");


              from("direct:aggregateZipEntries")
                  .routeId("aggregateZipEntries")
                   // force to start with meta data file ('..._files.xml')
                  .resequence(simple("${header.CamelFileName.endsWith('_files.xml')}"))
                      .allowDuplicates()
                      .reverse()
                  .aggregate(new AggregationStrategyImplementation())
                      .header("msgId")
                      .completionTimeout(2000L)
                  .multicast()
                      .to("direct:saveDocumentMetaData", "direct:doErrorProcessing");


              from("direct:saveDocumentMetaData")
                  .routeId("saveDocumentMetaData")
                  .split(simple("${body.listOfDocumentMetaData}"))
                  .multicast()
                  .to("jpa://com.business.persistence.entities.DocumentMetaData"+
                            "?persistenceUnit=persistenceUnit"+
                            "&consumer.transacted=true"+
                            "&transactionManager=#transactionManager"+
                            "&flushOnSend=false")
                  .log("processDocumentMetaData: ${body.getName()}");


              from("direct:doErrorProcessing")
                  .routeId("doErrorProcessing")
                  .process(new Processor() {
                      public void process(Exchange exchange) throws Exception {
                              throw new RuntimeException("Error");
                      }
                  });
          }
      };

      camelContext.addRoutes(routeBuilder);

      camelContext.start();
      Thread.sleep(10000);
      camelContext.stop();
  }

  ...

Best answer

It seems that the transaction created in the "splitZip" route does not extend to the save operations in "saveDocumentMetaData" and "doErrorProcessing", perhaps because the aggregator is used without a persistent store. That is why the exception thrown in "doErrorProcessing" does not cause a rollback in "saveDocumentMetaData".
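To see why the aggregator breaks the transaction, note that Spring binds the active transaction to the current thread, while an aggregator with `completionTimeout` emits the aggregated exchange from a background timer thread. The following sketch (plain Java, no Camel or Spring; the class and names are illustrative, not from the question) mimics that thread-bound behavior with a `ThreadLocal`:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadBoundTxDemo {
    // Stand-in for Spring's thread-bound transaction context
    // (a plain ThreadLocal is not inherited by other threads)
    static final ThreadLocal<String> currentTx = new ThreadLocal<>();

    // Returns what a *different* thread (analogous to the
    // aggregator's timeout thread) sees as the current transaction
    static String txSeenByOtherThread() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return pool.submit(() -> currentTx.get()).get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        currentTx.set("TX-1"); // transaction begun on the consumer thread
        System.out.println("same thread:  " + currentTx.get());        // TX-1
        System.out.println("other thread: " + txSeenByOtherThread());  // null
    }
}
```

The exchange that reaches "saveDocumentMetaData" is in the same position as the second call: the transaction started by `.transacted()` in "splitZip" is simply not visible on the thread that runs the multicast.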

To include "saveDocumentMetaData" and "doErrorProcessing" in one transaction, start a new transaction for the multicast:

// ...
  .aggregate(new AggregationStrategyImplementation())
    .header("msgId")
    .completionTimeout(2000L)
    .to("direct:persist");
// new transacted route
from("direct:persist")
  .routeId("persist")
  .transacted()
  .multicast()
    .to("direct:saveDocumentMetaData", "direct:doErrorProcessing");
