I'm rewriting Dumb Guy's code below using Python's standard threading and Queue modules.

```python
import threading
import urllib2
import Queue

urls_to_load = [
    'http://stackoverflow.com/',
    'http://slashdot.org/',
    'http://www.archive.org/',
    'http://www.yahoo.co.jp/',
]

def read_url(url, queue):
    data = urllib2.urlopen(url).read()
    print('Fetched %s from %s' % (len(data), url))
    queue.put(data)

def fetch_parallel():
    result = Queue.Queue()
    threads = [threading.Thread(target=read_url, args=(url, result))
               for url in urls_to_load]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

def fetch_sequential():
    result = Queue.Queue()
    for url in urls_to_load:
        read_url(url, result)
    return result
```

Best time for fetch_sequential() is 2 s; best time for fetch_parallel() is 0.9 s.

It is also incorrect to say that threads are useless in Python because of the GIL. This is one of the cases where threads are genuinely useful in Python, because the threads spend their time blocked on I/O. As you can see in my results, the parallel case is about 2 times faster.
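The code above is Python 2 (`urllib2` and `Queue` were renamed `urllib.request` and `queue` in Python 3). As a minimal sketch of the same I/O-bound threading pattern on Python 3, one could use `concurrent.futures.ThreadPoolExecutor` instead of managing `Thread` objects by hand; the URL list is taken from the answer, while the `fetcher` parameter and `max_workers` value below are illustrative choices, not part of the original answer:

```python
# Python 3 sketch of the parallel-fetch idea using a thread pool.
# Threads still help here despite the GIL because each worker is
# blocked waiting on network I/O most of the time.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

urls_to_load = [
    'http://stackoverflow.com/',
    'http://slashdot.org/',
    'http://www.archive.org/',
    'http://www.yahoo.co.jp/',
]

def read_url(url):
    # Fetch one URL and return its body as bytes.
    with urlopen(url, timeout=10) as resp:
        data = resp.read()
    print('Fetched %d bytes from %s' % (len(data), url))
    return data

def fetch_parallel(urls, fetcher=read_url, max_workers=8):
    # pool.map runs the fetches concurrently but yields results
    # in the same order as the input URLs.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetcher, urls))
```

Passing the fetch function in as a parameter also makes the concurrency logic testable without touching the network, e.g. `fetch_parallel(['a', 'b'], fetcher=str.upper)` returns `['A', 'B']`.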