Problem Description
I have an issue with jdbc-sink in this architecture:
postgres1 ---> kafka ---> postgres2
The producer is working fine, but the consumer has an error:
connect_1 | org.apache.kafka.connect.errors.RetriableException: java.sql.SQLException: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO "customers" ("id") VALUES (1) ON CONFLICT ("id") DO UPDATE SET was aborted: ERROR: syntax error at end of input
connect_1 |   Position: 77  Call getNextException to see other errors in the batch.
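The statement in the error ends with ON CONFLICT ("id") DO UPDATE SET and nothing after it, and PostgreSQL reports "syntax error at end of input" when the SET list is empty. A minimal sketch of the two statement shapes in psql (the table definition here is hypothetical, only to illustrate the syntax):

-- hypothetical one-column table, just to show the statement shapes
CREATE TABLE customers (id integer PRIMARY KEY);

-- valid upsert: at least one assignment after SET
INSERT INTO customers (id) VALUES (1)
ON CONFLICT (id) DO UPDATE SET id = EXCLUDED.id;

-- shape generated by the sink in the log above: nothing after SET,
-- which PostgreSQL rejects with "syntax error at end of input"
INSERT INTO customers (id) VALUES (1)
ON CONFLICT (id) DO UPDATE SET;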
This is my source.json:
{
  "name": "src-table",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "tasks.max": "1",
    "database.hostname": "postgres1_container",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "postgres",
    "database.whitelist": "postgres",
    "database.server.name": "postgres1",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory",
    "transforms": "route",
    "transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.route.regex": "([^.]+)\\.([^.]+)\\.([^.]+)",
    "transforms.route.replacement": "$3"
  }
}
And this is my jdbc-sink.json:
{
  "name": "jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "customers",
    "connection.url": "jdbc:postgresql://postgres2_container:5432/postgres?user=postgres&password=postgres",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.UnwrapFromEnvelope",
    "auto.create": "true",
    "insert.mode": "upsert",
    "pk.fields": "id",
    "pk.mode": "record_value"
  }
}
Versions:
debezium/zookeeper:0.9
debezium/kafka:0.9
debezium/postgres:9.6
debezium/connect:0.9
PostgreSQL JDBC Driver 42.2.5
Kafka Connect JDBC 5.2.1
I tried downgrading the JDBC driver and Confluent Kafka Connect, but I still get the same error.
Accepted Answer
Solved: the problem was that when I created the table in postgres1, I did not set id as a primary key.
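A sketch of the fix in SQL (the non-key columns are illustrative and assume the usual Debezium customers example; the point is only the PRIMARY KEY on id):

-- declare the primary key when creating the table in postgres1
CREATE TABLE customers (
    id         integer PRIMARY KEY,
    first_name text,
    last_name  text,
    email      text
);

-- or add it to an existing table
ALTER TABLE customers ADD PRIMARY KEY (id);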