Problem Description
This is a basic version of my code:

for url in urls:
    fp = urllib.urlopen(url)
    lines = fp.readlines()
    #print lines
    for line in lines:
        #print line
        if(reUrl.search(line)):
            print 'found'
            return 1
        else:
            print 'not found'
            return 0

This hangs occasionally for certain URLs. If I do a Ctrl-C (Linux) it will move on to the next URL. How can I get this to time out and move on to the next URL?
Recommended Answer
Since 2.3 there's socket.setdefaulttimeout(); this should do the job:

import socket
socket.setdefaulttimeout(10)
# throws a socket.timeout exception after 10s;
# the default is to wait indefinitely (or at least a very, very long time...)

For older Python versions, ask Google for timeoutsocket.py.
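For current Python 3, urllib.urlopen became urllib.request.urlopen, but the same setdefaulttimeout() idea still applies. A minimal sketch of the questioner's loop with the timeout in place (the url_matches helper and its arguments are illustrative, not from the original post):

```python
import re
import socket
import urllib.request  # Python 3 replacement for Python 2's urllib.urlopen

# Any blocking socket operation now raises socket.timeout after 10 seconds
# instead of hanging indefinitely.
socket.setdefaulttimeout(10)

def url_matches(url, pattern):
    """Return True if the page at `url` contains `pattern`, else False.

    A URL that hangs, times out, or is unreachable is treated as
    'not found', so a loop over many URLs simply moves on to the next.
    """
    reUrl = re.compile(pattern)
    try:
        with urllib.request.urlopen(url) as fp:
            for line in fp:
                if reUrl.search(line.decode("utf-8", errors="replace")):
                    return True
    except (socket.timeout, OSError):
        pass  # hung or unreachable URL: give up and let the caller continue
    return False
```

Since Python 2.6, urlopen() also accepts a per-call timeout argument (urllib.request.urlopen(url, timeout=10)), which avoids changing the process-wide default.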