I'm playing with a Spring GCP project. My first example with a GCP bucket works fine and uses the correct Google account, which I point to in the properties file:

spring.cloud.gcp.credentials.location=file:secret.json


As a next step, I tried to reproduce the BigQuery example.

To do this, I created a dataset on the GCP side and added the dataset name to the properties file:

spring.cloud.gcp.bigquery.datasetName=my_dataset
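
At this point the properties file contains just these two entries (note that no project id is set explicitly anywhere):

spring.cloud.gcp.credentials.location=file:secret.json
spring.cloud.gcp.bigquery.datasetName=my_dataset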


I also copied the controller:

@Controller
public class BigQueryController {
    @Autowired
    BigQuerySampleConfiguration.BigQueryFileGateway bigQueryFileGateway;

    @Autowired
    BigQueryTemplate bigQueryTemplate;

    @Value("${spring.cloud.gcp.bigquery.datasetName}")
    private String datasetName;

    @GetMapping("/bigquery")
    public ModelAndView renderIndex(ModelMap map) {
        map.put("datasetName", this.datasetName);
        return new ModelAndView("index.html", map);
    }

    /**
     * Handles a file upload using {@link BigQueryTemplate}.
     *
     * @param file      the CSV file to upload to BigQuery
     * @param tableName name of the table to load data into
     * @return ModelAndView of the response to send back to users
     * @throws IOException if the file is unable to be loaded.
     */
    @PostMapping("/uploadFile")
    public ModelAndView handleFileUpload(
            @RequestParam("file") MultipartFile file, @RequestParam("tableName") String tableName)
            throws IOException {

        ListenableFuture<Job> loadJob = this.bigQueryTemplate.writeDataToTable(
                tableName, file.getInputStream(), FormatOptions.csv());

        return getResponse(loadJob, tableName);
    }

    /**
     * Handles CSV data upload using Spring Integration {@link BigQuerySampleConfiguration.BigQueryFileGateway}.
     *
     * @param csvData   the String CSV data to upload to BigQuery
     * @param tableName name of the table to load data into
     * @return ModelAndView of the response to send back to users
     */
    @PostMapping("/uploadCsvText")
    public ModelAndView handleCsvTextUpload(
            @RequestParam("csvText") String csvData, @RequestParam("tableName") String tableName) {

        ListenableFuture<Job> loadJob = this.bigQueryFileGateway.writeToBigQueryTable(csvData.getBytes(), tableName);

        return getResponse(loadJob, tableName);
    }

    private ModelAndView getResponse(ListenableFuture<Job> loadJob, String tableName) {
        String message;
        try {
            Job job = loadJob.get();
            message = "Successfully loaded data file to " + tableName;
        } catch (Exception e) {
            e.printStackTrace();
            message = "Error: " + e.getMessage();
        }

        return new ModelAndView("index")
                .addObject("datasetName", this.datasetName)
                .addObject("message", message);
    }
}


and the configuration:

@Configuration
public class BigQuerySampleConfiguration {

    @Bean
    public DirectChannel bigQueryWriteDataChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel bigQueryJobReplyChannel() {
        return new DirectChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "bigQueryWriteDataChannel")
    public MessageHandler messageSender(BigQueryTemplate bigQueryTemplate) {
        BigQueryFileMessageHandler messageHandler = new BigQueryFileMessageHandler(bigQueryTemplate);
        messageHandler.setFormatOptions(FormatOptions.csv());
        messageHandler.setOutputChannel(bigQueryJobReplyChannel());
        return messageHandler;
    }

    @Bean
    public GatewayProxyFactoryBean gatewayProxyFactoryBean() {
        GatewayProxyFactoryBean factoryBean = new GatewayProxyFactoryBean(BigQueryFileGateway.class);
        factoryBean.setDefaultRequestChannel(bigQueryWriteDataChannel());
        factoryBean.setDefaultReplyChannel(bigQueryJobReplyChannel());
        // Ensures that BigQueryFileGateway does not return double-wrapped ListenableFutures
        factoryBean.setAsyncExecutor(null);
        return factoryBean;
    }

    /**
     * Spring Integration gateway which allows sending data to load to BigQuery through a
     * channel.
     */
    @MessagingGateway
    public interface BigQueryFileGateway {
        ListenableFuture<Job> writeToBigQueryTable(
                byte[] csvData, @Header(BigQuerySpringMessageHeaders.TABLE_NAME) String tableName);
    }

}


index.html (I don't think I need to copy it here)

However, when I try to write something to the BigQuery dataset, I get the following error:

2020-03-03 15:01:32.147 ERROR 16224 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet]    : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is com.google.cloud.bigquery.BigQueryException: 404 Not Found
{
  "error": {
    "code": 404,
    "message": "Not found: Dataset my_production_project:my_dataset",
    "errors": [
      {
        "message": "Not found: Dataset my_production_project:my_dataset",
        "domain": "global",
        "reason": "notFound",
        "debugInfo": "[NOT_FOUND] time: 2020-03-03T04:01:31.971-08:00, Reason: code=NOT_FOUND message=Dataset bx-dev-contactmatch:my_dataset not found debug=time: 2020-03-03T04:01:31.971-08:00 errorProto=domain: \"cloud.helix.ErrorDomain\"\ncode: \"NOT_FOUND\"\nargument: \"Dataset\"\nargument: \"bx-dev-contactmatch:my_dataset\"\ndebug_info: \"time: 2020-03-03T04:01:31.971-08:00\"\n\n\tat com.google.cloud.helix.common.Exceptions$Public.resourceNotFound(Exceptions.java:281)\n\tat com.google.cloud.helix.common.Exceptions$Public.resourceNotFound(Exceptions.java:285)\n\tat com.google.cloud.helix.server.metadata.DatasetTrackerSpanner.lambda$getDatasetAsync$1(DatasetTrackerSpanner.java:236)\n\tat com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.doTransform(AbstractTransformFuture.java:213)\n\tat com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.doTransform(AbstractTransformFuture.java:202)\n\tat com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:118)\n\tat com.google.common.util.concurrent.MoreExecutors$5$1.run(MoreExecutors.java:1158)\n\tat com.google.common.context.ContextRunnable.runInContext(ContextRunnable.java:89)\n\tat com.google.common.context.ContextRunnable$1.run(ContextRunnable.java:78)\n\tat io.grpc.Context.run(Context.java:602)\n\tat com.google.tracing.GenericContextCallback.runInInheritedContext(GenericContextCallback.java:75)\n\tat com.google.common.context.ContextRunnable.run(ContextRunnable.java:74)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n"
      }
    ],
    "status": "NOT_FOUND"
  }
}
] with root cause

com.google.api.client.http.HttpResponseException: 404 Not Found
{
  "error": {
    "code": 404,
    "message": "Not found: Dataset my_production_project:my_dataset",
    "errors": [
      {
        "message": "Not found: Dataset my_production_project:my_dataset",
        "domain": "global",
        "reason": "notFound",
        "debugInfo": "[NOT_FOUND] time: 2020-03-03T04:01:31.971-08:00, Reason: code=NOT_FOUND message=Dataset bx-dev-contactmatch:my_dataset not found debug=time: 2020-03-03T04:01:31.971-08:00 errorProto=domain: \"cloud.helix.ErrorDomain\"\ncode: \"NOT_FOUND\"\nargument: \"Dataset\"\nargument: \"bx-dev-contactmatch:my_dataset\"\ndebug_info: \"time: 2020-03-03T04:01:31.971-08:00\"\n\n\tat com.google.cloud.helix.common.Exceptions$Public.resourceNotFound(Exceptions.java:281)\n\tat com.google.cloud.helix.common.Exceptions$Public.resourceNotFound(Exceptions.java:285)\n\tat com.google.cloud.helix.server.metadata.DatasetTrackerSpanner.lambda$getDatasetAsync$1(DatasetTrackerSpanner.java:236)\n\tat com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.doTransform(AbstractTransformFuture.java:213)\n\tat com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.doTransform(AbstractTransformFuture.java:202)\n\tat com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:118)\n\tat com.google.common.util.concurrent.MoreExecutors$5$1.run(MoreExecutors.java:1158)\n\tat com.google.common.context.ContextRunnable.runInContext(ContextRunnable.java:89)\n\tat com.google.common.context.ContextRunnable$1.run(ContextRunnable.java:78)\n\tat io.grpc.Context.run(Context.java:602)\n\tat com.google.tracing.GenericContextCallback.runInInheritedContext(GenericContextCallback.java:75)\n\tat com.google.common.context.ContextRunnable.run(ContextRunnable.java:74)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n"
      }
    ],
    "status": "NOT_FOUND"
  }
}


As the error shows, the application tries to access the unexpected project my_production_project.

Contents of secret.json:

{
  "type": "service_account",
  "project_id": "spring-samples-269912",
  "private_key_id": "04d22c73e3ef53dd82f20c322f91a79e2fbc76d9",
  "private_key": "-----BEGIN PRIVATE KEY-----******-----END PRIVATE KEY-----\n",
  "client_email": "spring-samples-service-account@spring-samples-269912.iam.gserviceaccount.com",
  "client_id": "117486490087851987327",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/spring-samples-service-account%40spring-samples-269912.iam.gserviceaccount.com"
}


As you can see, it refers to the spring-samples-269912 project.

How can I fix this?

P.S.

Both examples (the GCP bucket and BigQuery) live in the same project, so they use the same application.properties file and the same secret.json.

Accepted answer

The problem went away once I set

spring.cloud.gcp.bigquery.project-id=spring-samples-269912


or

spring.cloud.gcp.project-id=spring-samples-269912
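
Why setting this helps: when no project id property is given, the default project is presumably resolved from the environment (for example the GOOGLE_CLOUD_PROJECT variable or the active gcloud configuration), not from the service-account file referenced by spring.cloud.gcp.credentials.location, which would explain how my_production_project leaked in. A minimal sketch of the effective fallback order (the class and method here are illustrative, not the starter's actual internals):

```java
import java.util.Optional;

public class ProjectIdResolver {
    // Hypothetical illustration of the precedence the properties appear to follow:
    // the BigQuery-specific project id wins over the global one, which wins over
    // whatever default the environment supplies.
    static String resolve(String bigQueryProjectId, String gcpProjectId, String environmentDefault) {
        return Optional.ofNullable(bigQueryProjectId)
                .or(() -> Optional.ofNullable(gcpProjectId))
                .orElse(environmentDefault);
    }

    public static void main(String[] args) {
        // With neither property set, the environment default wins -- which is
        // how the unexpected production project was picked.
        System.out.println(resolve(null, null, "my_production_project"));   // my_production_project
        // Setting either property pins the project used for BigQuery calls.
        System.out.println(resolve(null, "spring-samples-269912", "my_production_project"));   // spring-samples-269912
    }
}
```

In other words, spring.cloud.gcp.bigquery.project-id pins the project for BigQuery only, while spring.cloud.gcp.project-id pins it for every GCP integration in the application.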

On java - spring-cloud-gcp-starter-bigquery ignores spring.cloud.gcp.credentials.location from the properties file, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/60506960/
