Creating an Elasticsearch Input Format with Flink's RichInputFormat

This article looks at how to create an input format for Elasticsearch using Flink's RichInputFormat; the question and answer below may be a useful reference for anyone facing the same problem.

Problem Description

We are using Elasticsearch 6.8.4 and Flink 1.0.18.

We have an index in Elasticsearch with 1 shard and 1 replica, and I want to create a custom input format to read and write data in Elasticsearch using the Apache Flink DataSet API with more than one input split, in order to achieve better performance. Is there any way to achieve this requirement?

Note: each document is fairly large (almost 8 MB), so because of this size constraint we can read only 10 documents per request, yet each full read needs to retrieve about 500k records; at 10 documents per request, that is roughly 50,000 sequential requests unless the read is parallelized.

As I understand it, the degree of parallelism should equal the number of shards/partitions of the data source. However, since we store only a small amount of data, we have kept the number of shards at 1, and our data is fairly static, growing only slightly each month.

Any help or source-code example would be much appreciated.

Recommended Answer

You need to be able to generate queries against Elasticsearch that partition your source data into roughly equal chunks. You can then run your input source with a parallelism greater than 1 and have each subtask read only its own part of the index data. A sketch of one way to do this is shown below.
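
Neither the question nor the answer includes code, so here is a minimal sketch of one way to implement such a source, assuming Elasticsearch 6.x's sliced-scroll feature and the high-level REST client. A sliced scroll splits one scroll into N disjoint slices (by default by hashing `_id`), which is what lets even a single-shard index be read by several parallel subtasks. The class name `SlicedEsInputFormat`, the host `localhost:9200`, and the parameter values are illustrative assumptions, not part of any library.

```java
import org.apache.flink.api.common.io.DefaultInputSplitAssigner;
import org.apache.flink.api.common.io.RichInputFormat;
import org.apache.flink.api.common.io.statistics.BaseStatistics;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.io.GenericInputSplit;
import org.apache.flink.core.io.InputSplitAssigner;
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.*;
import org.elasticsearch.client.*;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.slice.SliceBuilder;
import java.io.IOException;
import java.util.ArrayDeque;

// Hypothetical sketch: each Flink input split scrolls one "slice" of the index.
public class SlicedEsInputFormat extends RichInputFormat<String, GenericInputSplit> {

    private final String index;
    private final int totalSlices; // desired read parallelism; may exceed shard count
    private final int batchSize;   // e.g. 10, given the ~8 MB documents

    private transient RestHighLevelClient client;
    private transient String scrollId;
    private transient ArrayDeque<String> buffer;
    private transient boolean exhausted;

    public SlicedEsInputFormat(String index, int totalSlices, int batchSize) {
        this.index = index;
        this.totalSlices = totalSlices;
        this.batchSize = batchSize;
    }

    @Override public void configure(Configuration parameters) {}
    @Override public BaseStatistics getStatistics(BaseStatistics cached) { return cached; }

    @Override
    public GenericInputSplit[] createInputSplits(int minNumSplits) {
        // One split per scroll slice, independent of the number of shards.
        GenericInputSplit[] splits = new GenericInputSplit[totalSlices];
        for (int i = 0; i < totalSlices; i++) {
            splits[i] = new GenericInputSplit(i, totalSlices);
        }
        return splits;
    }

    @Override
    public InputSplitAssigner getInputSplitAssigner(GenericInputSplit[] splits) {
        return new DefaultInputSplitAssigner(splits);
    }

    @Override
    public void open(GenericInputSplit split) throws IOException {
        client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http"))); // assumed host
        buffer = new ArrayDeque<>();
        exhausted = false;
        // Slice i of N: ES routes each document to exactly one slice, so subtasks never overlap.
        SearchRequest request = new SearchRequest(index)
                .source(new SearchSourceBuilder()
                        .size(batchSize)
                        .slice(new SliceBuilder(split.getSplitNumber(), totalSlices)))
                .scroll(TimeValue.timeValueMinutes(5));
        SearchResponse response = client.search(request, RequestOptions.DEFAULT);
        scrollId = response.getScrollId();
        enqueue(response);
    }

    private void enqueue(SearchResponse response) {
        SearchHit[] hits = response.getHits().getHits();
        if (hits.length == 0) { exhausted = true; return; }
        for (SearchHit hit : hits) { buffer.add(hit.getSourceAsString()); }
    }

    @Override
    public boolean reachedEnd() throws IOException {
        if (!buffer.isEmpty()) return false;
        if (exhausted) return true;
        // Pull the next page of this slice's scroll.
        SearchResponse response = client.scroll(
                new SearchScrollRequest(scrollId).scroll(TimeValue.timeValueMinutes(5)),
                RequestOptions.DEFAULT);
        scrollId = response.getScrollId();
        enqueue(response);
        return buffer.isEmpty();
    }

    @Override public String nextRecord(String reuse) { return buffer.poll(); }

    @Override
    public void close() throws IOException {
        if (client == null) return;
        ClearScrollRequest clear = new ClearScrollRequest();
        clear.addScrollId(scrollId);
        client.clearScroll(clear, RequestOptions.DEFAULT);
        client.close();
    }
}
```

A usage fragment might look like this (again a sketch; the index name "my-index" and the parallelism of 4 are placeholders):

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

// Read the index with 4 parallel slices, 10 documents per scroll page.
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<String> docs = env
        .createInput(new SlicedEsInputFormat("my-index", 4, 10))
        .setParallelism(4);
```

One design note: when the number of slices exceeds the number of shards, Elasticsearch computes each slice by filtering on a hash of `_id`, which makes the first page of each slice comparatively expensive, so it is worth keeping the slice count close to the job parallelism rather than making it very large.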
