This article covers how to handle "Google Cloud Dataflow: ModuleNotFoundError: No module named 'main'" and should be a useful reference if you are facing the same problem.

Problem Description

I am trying to run an Apache Beam pipeline on Cloud Dataflow. The original function has been deployed as a Cloud Function that is supposed to create a Dataflow job which reads a text file and inserts the rows into BigQuery, but the job fails to run on Dataflow. The function and the error are given below.

from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import GoogleCloudOptions
from apache_beam.options.pipeline_options import StandardOptions
import apache_beam as beam

class Split(beam.DoFn):

    def process(self, element):

        # Split each CSV line into a dict keyed by the destination column names.
        element = element.split(',')

        return [{
            'field_1': element[0],
            'field_2': element[1],
            'field_3': element[2]}]

def main(data, context):

    options = PipelineOptions()

    # Placeholder values: job_name, staging_location, temp_location,
    # service_account_email, bq_table and schema stand in for settings
    # redacted from the original question.
    google_cloud_options = options.view_as(GoogleCloudOptions)
    google_cloud_options.project = 'my-project'
    google_cloud_options.job_name = job_name
    google_cloud_options.staging_location = staging_location
    google_cloud_options.temp_location = temp_location
    google_cloud_options.service_account_email = service_account_email
    options.view_as(StandardOptions).runner = 'DataflowRunner'

    # Exiting the with-block runs the pipeline and waits for it to finish,
    # so no separate p.run() / wait_until_finish() calls are needed.
    with beam.Pipeline(options=options) as p:
        (
            p
            | 'ReadData' >> beam.io.ReadFromText('gs://source_file_location')
            | 'ParseCSV' >> beam.ParDo(Split())
            | 'WriteToBigQuery' >> beam.io.WriteToBigQuery(
                table=bq_table,
                schema=schema,
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
        )

if __name__ == '__main__':

    main('data', 'context')

The error that I get on Dataflow is:

Error message from worker: Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/apache_beam/internal/pickler.py", line 286, in loads
    return dill.loads(s)
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 275, in loads
    return load(file, ignore, **kwds)
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 270, in load
    return Unpickler(file, ignore=ignore, **kwds).load()
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 472, in load
    obj = StockUnpickler.load(self)
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 462, in find_class
    return StockUnpickler.find_class(self, module, name)
ModuleNotFoundError: No module named 'main'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 648, in do_work
    work_executor.execute()
  File "/usr/local/lib/python3.7/site-packages/dataflow_worker/executor.py", line 176, in execute
    op.start()
  File "apache_beam/runners/worker/operations.py", line 649, in apache_beam.runners.worker.operations.DoOperation.start
  File "apache_beam/runners/worker/operations.py", line 651, in apache_beam.runners.worker.operations.DoOperation.start
  File "apache_beam/runners/worker/operations.py", line 652, in apache_beam.runners.worker.operations.DoOperation.start
  File "apache_beam/runners/worker/operations.py", line 261, in apache_beam.runners.worker.operations.Operation.start
  File "apache_beam/runners/worker/operations.py", line 266, in apache_beam.runners.worker.operations.Operation.start
  File "apache_beam/runners/worker/operations.py", line 597, in apache_beam.runners.worker.operations.DoOperation.setup
  File "apache_beam/runners/worker/operations.py", line 602, in apache_beam.runners.worker.operations.DoOperation.setup
  File "/usr/local/lib/python3.7/site-packages/apache_beam/internal/pickler.py", line 290, in loads
    return dill.loads(s)
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 275, in loads
    return load(file, ignore, **kwds)
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 270, in load
    return Unpickler(file, ignore=ignore, **kwds).load()
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 472, in load
    obj = StockUnpickler.load(self)
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 462, in find_class
    return StockUnpickler.find_class(self, module, name)
ModuleNotFoundError: No module named 'main'

Recommended Answer

You may have to bundle up your code (including your DoFn) as a dependency in a separate file; see https://beam.apache.org/documentation/sdks/python-pipeline-dependencies/
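
A minimal sketch of that packaging, assuming an illustrative package name of my_lib (the layout, the setup.py contents, and the package name are assumptions, not taken from the original question):

    # Assumed project layout:
    #   main.py          <- Cloud Function entry point
    #   setup.py         <- makes my_lib installable on the Dataflow workers
    #   my_lib/
    #       __init__.py  <- the Split DoFn and the pipeline code move here

    # setup.py
    import setuptools

    setuptools.setup(
        name='my_lib',
        version='0.0.1',
        packages=setuptools.find_packages(),
        install_requires=[],  # list the DoFn's runtime dependencies here
    )

The pipeline then has to point Dataflow at this file so the workers install the package before unpickling any DoFns, for example:

    from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

    options = PipelineOptions()
    # Ship the local package to the workers so 'my_lib' is importable there.
    options.view_as(SetupOptions).setup_file = './setup.py'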

In this case, it sounds like Cloud Functions executes your code from a file named main.py. Beam pickles the Split DoFn together with a reference to the module that defines it, so the Dataflow workers try to import a module named main, which does not exist on the workers; that would give this kind of error. I would suggest packaging up your code as a dependency, so that the code here would simply be from my_lib import main.
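
With that in place, the Cloud Function file itself might shrink to something like the following sketch (again, my_lib is an assumed name):

    # main.py -- Cloud Function entry point.
    # The Split DoFn now lives in my_lib, so the workers unpickle it as
    # my_lib.Split instead of looking for a nonexistent 'main' module.
    from my_lib import main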

