How can I speed this up?

Problem description


I have a script which I think is pretty basic scraping, call it what you will, but it takes on average at least 6 seconds...is it possible to speed it up? The $date variables are only there for timing the code and don't add anything significant to the time it takes. I have set two timing markers and each is approx 3 seconds between. Example URL below for testing

$date = date('m/d/Y h:i:s a', time());

echo "start of timing $date<br /><br />";

include('simple_html_dom.php');

function getUrlAddress()
{
$url = $_SERVER['HTTPS'] == 'on' ? 'https' : 'http';
return $url .'://'.$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI'];
}

$url = getUrlAddress();

$date = date('m/d/Y h:i:s a', time());  echo "<br /><br />after geturl $date<br /><br />";

$parts = explode("/",$url);

$html = file_get_html($url);

$date = date('m/d/Y h:i:s a', time());  echo "<br /><br />after file_get_url $date<br /><br />";

$file_string = file_get_contents($url);
preg_match('/<title>(.*)<\/title>/i', $file_string, $title);
$title_out = $title[1];

foreach ($html->find('img') as $e) {

    $image = $e->src;

    if (preg_match("/orangeBlue/", $image)) { $image = ''; }
    if (preg_match("/BeaconSprite/", $image)) { $image = ''; }

    if ($image != '') {

        if (preg_match("/http/", $image)) { $image = $image; }
        elseif (preg_match("*//*", $image)) { $image = 'http:'.$image; }
        else { $image = $parts['0']."//".$parts[1].$parts[2]."/".$image; }

        $size = getimagesize($image);
        if (($size[0] > 110) && ($size[1] > 110)) {
            if (preg_match("/http/", $image)) { $image = $image; }
            echo '<img src='.$image.'><br>';
        }
    }
}

$date = date('m/d/Y h:i:s a', time());  echo "<br /><br />end of timing $date<br /><br />";
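One avoidable cost visible in the code above: the page is downloaded twice, once by `file_get_html()` and again by `file_get_contents()` just to extract the title. A minimal sketch of doing both jobs from a single download - using PHP's built-in DOMDocument rather than simple_html_dom, so the snippet stands alone:

```php
<?php
// Sketch: fetch the page once and reuse the string for both the title and the
// image list. Uses DOMDocument instead of simple_html_dom so the snippet is
// self-contained; the extraction logic is the same in spirit.
function extractTitleAndImages($html)
{
    $doc = new DOMDocument();
    @$doc->loadHTML($html); // suppress warnings from real-world malformed HTML

    $titleNodes = $doc->getElementsByTagName('title');
    $title = $titleNodes->length > 0 ? $titleNodes->item(0)->textContent : '';

    $images = array();
    foreach ($doc->getElementsByTagName('img') as $img) {
        $images[] = $img->getAttribute('src');
    }
    return array($title, $images);
}

// One download instead of two:
// list($title_out, $images) = extractTitleAndImages(file_get_contents($url));
```

This removes one full HTTP round trip per run, though on this page the bigger cost is likely the per-image fetching discussed in the answer below.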

Example URL

UPDATE

This is what the timing markers actually show:

start of timing 01/24/2012 12:31:50 am

after geturl 01/24/2012 12:31:50 am

after file_get_url 01/24/2012 12:31:53 am

end of timing 01/24/2012 12:31:57 am

http://www.ebay.co.uk/itm/Duke-Nukem-Forever-XBOX-360-Game-BRAND-NEW-SEALED-UK-PAL-UK-Seller-/170739972246?pt=UK_PC_Video_Games_Video_Games_JS&hash=item27c0e53896
Solution

It's probably the getimagesize function - it goes out and fetches every image on the page so it can determine its size. Maybe you can write something with curl to request only the headers and read Content-Length (though, actually, that may be roughly what getimagesize does anyway).
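The header-only idea can be sketched with curl: a HEAD request transfers no body, so only the headers cross the wire. The trade-off is that Content-Length gives bytes, not pixel dimensions, so it can only approximate the >110x110 filter (for instance by skipping files too small to be a real photo). A sketch, with the threshold of 4 KB picked purely for illustration:

```php
<?php
// Sketch: issue a HEAD request and return the raw response headers,
// instead of letting getimagesize() open every image.
function headRequestHeaders($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);         // HEAD: no body transferred
    curl_setopt($ch, CURLOPT_HEADER, true);         // keep headers in the output
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return instead of echoing
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $headers = curl_exec($ch);
    curl_close($ch);
    return $headers === false ? '' : $headers;
}

// Pull the Content-Length value out of a raw header block (-1 if absent).
function parseContentLength($rawHeaders)
{
    if (preg_match('/^Content-Length:\s*(\d+)/mi', $rawHeaders, $m)) {
        return (int) $m[1];
    }
    return -1;
}

// Possible use in the loop: skip files too small to be a real product photo.
// if (parseContentLength(headRequestHeaders($image)) > 4096) { echo '<img src='.$image.'><br>'; }
```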

At any rate, back in the day I wrote a few spiders, and this kind of thing is just slow to do: internet speeds are better than ever, but it is still a separate fetch for every element. And I wasn't even concerned with images.
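One mitigation the answer doesn't mention: those per-element fetches don't have to run one after another. With curl_multi the HEAD requests can run concurrently, so total time approaches that of the slowest image rather than the sum of all of them. A sketch, assuming the list of image URLs has already been collected:

```php
<?php
// Create one HEAD-request handle per URL (nothing is fetched yet).
function buildHeadHandles(array $urls)
{
    $handles = array();
    foreach ($urls as $u) {
        $ch = curl_init($u);
        curl_setopt($ch, CURLOPT_NOBODY, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $handles[$u] = $ch;
    }
    return $handles;
}

// Run all HEAD requests concurrently; return url => Content-Length
// (-1 where the server sent no Content-Length header).
function fetchSizesInParallel(array $urls)
{
    $handles = buildHeadHandles($urls);
    $mh = curl_multi_init();
    foreach ($handles as $ch) {
        curl_multi_add_handle($mh, $ch);
    }

    // Drive all transfers at once instead of one after another.
    do {
        curl_multi_exec($mh, $running);
        curl_multi_select($mh);
    } while ($running > 0);

    $sizes = array();
    foreach ($handles as $url => $ch) {
        $sizes[$url] = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
    return $sizes;
}
```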
