This article covers the question "Can you use Liquibase to initialize the Spring Batch metadata tables?" and the recommended answer; it should be a useful reference for anyone hitting the same problem.

Problem description

Currently I have the setup shown below. When the batch job runs locally, it automatically creates the necessary metadata tables using the data-source property values, since initialize-schema is set to always. Liquibase also runs and creates any tables listed in its changelog.

Here is my application.yml file:

spring:
  batch:
    initialize-schema: always
    job:
      enabled: true
  liquibase:
    url: db_url
    user: deploy_user
    password: deploy_pass
    change-log: classpath:db/changelog/db.changelog-master.yaml
    enabled: true
data-source:
  mysql:
    user: r_user
    password: r_pass
    jdbc-url: db_url
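
One thing to note about the property names: spring.batch.initialize-schema is the Spring Boot 2.x name; on newer Boot versions the same setting may instead live under spring.batch.jdbc (it was moved there in later releases), in which case the equivalent fragment would look roughly like this:

spring:
  batch:
    jdbc:
      initialize-schema: always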

Here is my db.changelog-master.yaml file:

databaseChangeLog:

  - changeSet:
    dbms: mysql
    id: create-sample-table
    author: me
    sql: CREATE TABLE sample_table (
      sample_id VARCHAR(255) NOT NULL,
      sample_text TEXT,
      PRIMARY KEY (sample_id)
      ) ENGINE=InnoDB DEFAULT
      CHARSET=utf8 COLLATE=utf8_bin;

MySQL datasource config:

@Configuration
public class DataSourceConfiguration {

    @Primary
    @Bean(name = "mySQLDataSource")
    @ConfigurationProperties("data-source.mysql")
    public DataSource mySQLDataSource() {
        return DataSourceBuilder.create().type(HikariDataSource.class).build();
    }
}

Liquibase Configuration (probably posting more than what's needed):

@Configuration
@EnableConfigurationProperties(LiquibaseProperties.class)
public class LiquibaseConfiguration {
    private static final Logger LOG = LoggerFactory.getLogger(LiquibaseConfiguration.class);

    @Autowired
    private LiquibaseProperties liquibaseProperties;


    public DataSource liquibaseDataSource() {
        DataSourceBuilder factory = DataSourceBuilder
                .create()
                .url(liquibaseProperties.getUrl())
                .username(liquibaseProperties.getUser())
                .password(liquibaseProperties.getPassword());

        return factory.build();
    }

    public void testLiquibaseConnection() throws SQLException {

        LOG.info("Testing connection to Liquibase (in case PCF restarts and we have stale dynamic secrets)...");
        liquibaseDataSource().getConnection();
        LOG.info("Testing connection to Liquibase (in case PCF restarts and we have stale dynamic secrets)... Succeeded");
    }

    @Bean
    public SpringLiquibase liquibase() {
        try {
            testLiquibaseConnection();
        } catch (Exception ex) {
            LOG.warn("WARNING: Could not connect to the database using " + liquibaseProperties.getUser() + ", so we will be skipping the Liquibase Migration for now. ", ex);
            return null;
        }
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setChangeLog(this.liquibaseProperties.getChangeLog());
        liquibase.setContexts(this.liquibaseProperties.getContexts());
        liquibase.setDataSource(liquibaseDataSource());
        liquibase.setDefaultSchema(this.liquibaseProperties.getDefaultSchema());
        liquibase.setDropFirst(this.liquibaseProperties.isDropFirst());
        liquibase.setShouldRun(this.liquibaseProperties.isEnabled());
        liquibase.setLabels(this.liquibaseProperties.getLabels());
        liquibase.setChangeLogParameters(this.liquibaseProperties.getParameters());
        return liquibase;
    }

}

The issue is that we have different credentials for creating/deploying tables and for reading/writing to tables in our deployed environments. So the setup above will create tables via Liquibase, but fail to create the metadata tables because the wrong credentials are in use at deployment. Our current work-around to get the metadata tables created is to deploy with the data-source properties set to the deploy credentials, run the job to initialize the tables, and then redeploy with the read/write credentials. (We can't just leave the deploy credentials in place for reads because they have a very short TTL.)
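
For reference, the interim deploy in that work-around just points the data-source block at the deploy credentials; a sketch of what that temporary fragment looks like (deploy_user / deploy_pass stand in for the short-lived deploy secrets):

data-source:
  mysql:
    # temporary deploy credentials so initialize-schema can create the BATCH_* metadata tables;
    # swapped back to the read/write user once the job has run and the tables exist
    user: deploy_user
    password: deploy_pass
    jdbc-url: db_url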

Is it possible to create the metadata tables for Spring Batch via Liquibase automatically? Specifically, without adding the creation SQL manually to the changelog files?

Update:

Using veljkost's answer below, a changelog file that looks like this works:

databaseChangeLog:
  - changeSet:
      dbms: mysql
      id: create-spring-batch-metadata
      author: dev.me
      changes:
        - sqlFile:
            encoding: UTF-8
            path: classpath:/org/springframework/batch/core/schema-mysql.sql
            relativeToChangelogFile: false
            splitStatements: true
            stripComments: true
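
With the Spring Batch schema now managed by Liquibase, it presumably also makes sense to stop Spring Boot from re-running its own DDL with the read/write credentials, e.g. by switching the batch setting to never (a sketch against the same application.yml shown earlier):

spring:
  batch:
    initialize-schema: never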

Recommended answer

Yes, you can reference the schema files that already ship with the Spring Batch project. In the org.springframework.batch.core package you can find schema-*.sql files, where * is the name of the target database. Since you are running on MySQL, your change set would look something like this:

- changeSet:
    id: 1234
    author: adam.sandler
    changes:
      - sqlFile:
          encoding: utf8
          path: classpath:/org/springframework/batch/core/schema-mysql.sql
          relativeToChangelogFile: false
          splitStatements: true
          stripComments: true
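
If you would rather keep this change set in its own file than paste it into db.changelog-master.yaml, a plain Liquibase include should work as well; the db/changelog/db.changelog-batch.yaml path below is only an illustrative name:

databaseChangeLog:
  - include:
      file: db/changelog/db.changelog-batch.yaml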
