This article looks at how to download several files at the same time from an FTP server in Python. The question and its solution below should be a useful reference for anyone facing the same problem.

Problem Description



I'm a newbie in Python programming. My question is: how do I download several files at the same time? Not file by file, but simultaneously, from one directory on an FTP server. Right now I use this script, but I don't know how to rebuild the code:

import ftplib
import time

# ftp is assumed to be an already-connected ftplib.FTP instance,
# e.g. ftp = ftplib.FTP("example.com") followed by ftp.login()
filenames = []
ftp.retrlines("NLST", filenames.append)  # list the remote directory
print filenames

for filename in filenames:
    local_filename = filename
    f = open(local_filename, "wb")

    s = ftp.size(local_filename)
    sMB = s / (1024 * 1024)
    print "file name: " + local_filename + "\nfile size: " + str(sMB) + " MB"
    ftp.retrbinary("RETR %s" % local_filename, f.write)  # sequential download
    f.close()  # close each file once its download is complete

print "\nDone :)"
time.sleep(2)
ftp.quit()  # closing connection
time.sleep(5)

It works fine, but it downloads the files one after another, which is not what I need.

Solution

You could use multiple threads or processes. Make sure you create a new ftplib.FTP object in each thread. The simplest way (code-wise) is to use multiprocessing.Pool:

#!/usr/bin/env python
from multiprocessing.dummy import Pool # use threads
try:
    from urllib import urlretrieve
except ImportError: # Python 3
    from urllib.request import urlretrieve

def download(url):
    url = url.strip()
    try:
        return urlretrieve(url, url2filename(url)), None
    except Exception as e:
        return None, e

if __name__ == "__main__":
    p = Pool(20)  # specify the number of concurrent downloads
    print(p.map(download, open('urls')))  # perform parallel downloads

where urls is a file containing the FTP URLs of the files to download, e.g., ftp://example.com/path/to/file, and url2filename() extracts the filename part from a URL, e.g.:

import os
import posixpath
try:
    from urlparse import urlsplit
    from urllib import unquote
except ImportError: # Python 3
    from urllib.parse import urlsplit, unquote

def url2filename(url, encoding='utf-8'):
    """Return basename corresponding to url.

    >>> print url2filename('http://example.com/path/to/dir%2Ffile%C3%80?opt=1')
    fileÀ
    """
    urlpath = urlsplit(url).path
    basename = posixpath.basename(unquote(urlpath))
    if os.path.basename(basename) != basename:
        raise ValueError(url)  # reject 'dir%5Cbasename.ext' on Windows
    return basename
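
If you want to keep using ftplib directly, as in the original script, the same Pool approach applies; the key point from the answer is that each worker must open its own FTP connection. Below is a minimal sketch under that assumption; the host name example.com, the anonymous login, and the remote directory /path/to/dir are hypothetical placeholders:

from multiprocessing.dummy import Pool  # thread-backed Pool, same API as multiprocessing.Pool
import ftplib

HOST = "example.com"         # hypothetical server
REMOTE_DIR = "/path/to/dir"  # hypothetical directory

def download_file(filename):
    # Each call opens its own connection: ftplib.FTP objects are not
    # thread-safe, so they must not be shared between workers.
    ftp = ftplib.FTP(HOST)
    ftp.login()  # anonymous login; pass user/password if the server needs them
    ftp.cwd(REMOTE_DIR)
    with open(filename, "wb") as f:
        ftp.retrbinary("RETR %s" % filename, f.write)
    ftp.quit()
    return filename

if __name__ == "__main__":
    # List the directory once over a single connection...
    ftp = ftplib.FTP(HOST)
    ftp.login()
    ftp.cwd(REMOTE_DIR)
    filenames = ftp.nlst()
    ftp.quit()

    # ...then fetch the files in parallel, one connection per worker.
    p = Pool(5)  # number of concurrent downloads
    print(p.map(download_file, filenames))

Opening one connection per file costs a little overhead, but it keeps the workers fully independent, which is exactly why the answer recommends a new ftplib.FTP object per thread.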

This concludes the article on how to download several files simultaneously from FTP in Python. We hope the answer above is helpful.
