I am writing a C++ extension for Python and I am using distutils to compile the project. As the project grows, rebuilding takes longer and longer. Is there a way to speed up the build process?

I read that distutils cannot do parallel builds (the way make -j does). Are there any good alternatives to distutils that can?

I have also noticed that every call to python setup.py build recompiles all of the object files, even when I have only changed one source file. Is that expected, or am I doing something wrong here?

In case it helps, here are some of the files I am trying to compile: https://gist.github.com/2923577

Thanks!

Best answer

  • Try building with the environment variable CC="ccache gcc"; when the sources have not changed, this speeds up the build significantly. (Oddly, distutils uses CC for C++ source files as well.) Install the ccache package, of course.
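    A minimal sketch of setting this from inside setup.py rather than exporting it in the shell (this assumes ccache is installed, and relies on distutils picking up CC from the environment, as it does for the Unix toolchain):
    import os
    # Route compilation through ccache, but keep any CC the user has already exported.
    os.environ.setdefault("CC", "ccache gcc")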
  • Since you have a single extension that is assembled from multiple compiled object files, you can monkey-patch distutils to compile those in parallel (they are independent). Put this into your setup.py (adjusting N=2 as you like):
    # monkey-patch for parallel compilation
    def parallelCCompile(self, sources, output_dir=None, macros=None, include_dirs=None, debug=0, extra_preargs=None, extra_postargs=None, depends=None):
        # those lines are copied from distutils.ccompiler.CCompiler directly
        macros, objects, extra_postargs, pp_opts, build = self._setup_compile(output_dir, macros, include_dirs, sources, depends, extra_postargs)
        cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)
        # parallel code
        N=2 # number of parallel compilations
        import multiprocessing.pool
        def _single_compile(obj):
            try: src, ext = build[obj]
            except KeyError: return
            self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
        # convert to list, imap is evaluated on-demand
        list(multiprocessing.pool.ThreadPool(N).imap(_single_compile,objects))
        return objects
    import distutils.ccompiler
    distutils.ccompiler.CCompiler.compile=parallelCCompile
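    If you would rather size the pool to the machine than hard-code it, multiprocessing.cpu_count() is available on both Python 2 and 3; a small tweak to the N=2 line above:
    import multiprocessing
    N = multiprocessing.cpu_count()  # one compile job per CPU core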
    
  • For the sake of completeness, if you have multiple extensions, you can use the following solution:
    import os
    import multiprocessing
    try:
        from concurrent.futures import ThreadPoolExecutor as Pool
    except ImportError:
        from multiprocessing.pool import ThreadPool as LegacyPool
    
        # To ensure the with statement works. Required for some older 2.7.x releases
        class Pool(LegacyPool):
            def __enter__(self):
                return self
    
            def __exit__(self, *args):
                self.close()
                self.join()
    
    def build_extensions(self):
        """Function to monkey-patch
        distutils.command.build_ext.build_ext.build_extensions
    
        """
        self.check_extensions_list(self.extensions)
    
        try:
            num_jobs = os.cpu_count()
        except AttributeError:
            num_jobs = multiprocessing.cpu_count()
    
        with Pool(num_jobs) as pool:
            pool.map(self.build_extension, self.extensions)
    
    def compile(
        self, sources, output_dir=None, macros=None, include_dirs=None,
        debug=0, extra_preargs=None, extra_postargs=None, depends=None,
    ):
        """Function to monkey-patch distutils.ccompiler.CCompiler"""
        macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
            output_dir, macros, include_dirs, sources, depends, extra_postargs
        )
        cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)
    
        for obj in objects:
            try:
                src, ext = build[obj]
            except KeyError:
                continue
            self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
    
        # Return *all* object filenames, not just the ones we just built.
        return objects
    
    
    from distutils.ccompiler import CCompiler
    from distutils.command.build_ext import build_ext
    build_ext.build_extensions = build_extensions
    CCompiler.compile = compile
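    Note that this variant parallelizes across extensions: build_extensions() hands each Extension to the pool, while the patched compile() still walks the object files of a single extension serially. A rough usage sketch, with placeholder module and file names:
    from distutils.core import setup, Extension

    setup(
        name="example",  # placeholder project name
        ext_modules=[
            Extension("fast_a", sources=["fast_a.cpp"]),
            Extension("fast_b", sources=["fast_b.cpp"]),
        ],
    )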
    
  • Regarding c++ - speeding up the build process with distutils, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/11013851/
