This article covers the question "StreamReader.Close() responds very slowly - please help" and its answer, which may be a useful reference for anyone hitting the same problem.

Problem Description
All,

I am interested in reading the text of a web page and parsing it. After searching on this newsgroup I decided to use the following:

******************************* START OF CODE ************************
String sTemp = "http://cgi3.igl.net/cgi-bin/ladder/teamsql/team_view.cgi?ladd=teamknights&num=238&showall=1";
WebRequest myWebRequest = WebRequest.Create(sTemp);
WebResponse myWebResponse = myWebRequest.GetResponse();
Stream myStream = myWebResponse.GetResponseStream();
// default encoding is utf-8
StreamReader SR = new StreamReader(myStream);
Char[] buffer = new Char[2048];
// Read 256 characters at a time.
int count = SR.Read(buffer, 0, 2000);
//while (count > 0)
//{
//    // do some processing - may read all or part
//    count = SR.Read(buffer, 0, 2000);
//}
SR.Close(); // Release the resources
myWebResponse.Close();
******************************* END OF CODE ************************

This code should look very familiar because it is all over the newsgroup and Microsoft support help pages. The web page has a big table on it and it takes a while to download (even with a cable modem).

What I observe is the following. If I open and read all the data (i.e. until count > 0 fails), then stepping over SR.Close() takes no time at all. If I read only 2000 bytes, as the above example shows, then stepping over SR.Close() takes a long time (for me around 10-15 seconds). This may be a coincidence, but it seems to take the same amount of time as if I were reading all of the data. At this point I am starting to believe that SR.Close() does not abort reading until the entire web page has been received. This is not desired; in fact I parse the data and want to terminate loading early, because the entire process is so slow and not always necessary.

Does anyone know how to terminate the loading of the page so I can eliminate the delay?
I had implemented this in C++ with MFC using CInternetSession.OpenURL() and did not have this problem.

Thanks in advance.
Todd

Solution

Hey! It's an arrogant spammer :-P

"http://cgi3.igl.net/cgi-bin/ladder/teamsql/team_view.cgi?ladd=teamknights&num=238&showall=1"

I doubt that, as the code doesn't do what it advertises ;-)

Char[] buffer = new Char[2048];
// Read 256 characters at a time.
int count = SR.Read(buffer, 0, 2000);

Why a 2 kB buffer, when you're supposedly reading only 256 chars, but you're specifying 2000 chars for the Read() call?

Well, this particular page is an insane 6 MB large... The web server does not help the client either, as there is no Content-Length header provided, just Connection: close:

HTTP/1.1 200 OK
Date: Sat, 10 Apr 2004 10:20:31 GMT
Server: Apache/1.3.24 (Unix) mod_throttle/3.1.2 PHP/4.2.0
Connection: close
Content-Type: text/html

Even more interestingly, I cannot even download the entire page at all... Neither WebClient nor WebRequest/WebResponse is able to download that beast. Both stop downloading at the exact same position -- I guess the underlying TCP stream is prematurely closed. This must be some WinInet default behaviour (quirk?), as the same thing happens to me when I download the page using some ancient Visual J++ code that uses plain TCP. I think I'll write a plain HTTP client using System.Net.Sockets and see what happens. (Note: if the web server returns a Content-Length header, downloading the page works just fine.)

[...]

Use asynchronous I/O -- see WebRequest.Abort(), WebRequest.BeginGetResponse(), and WebRequest.EndGetResponse().

Cheers,
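To make the suggested approach concrete, here is a minimal sketch (not from the original thread) of how WebRequest.Abort() can be combined with BeginGetResponse()/EndGetResponse() to stop the transfer after the first chunk. The URL is a placeholder, and error handling is reduced to the one exception Abort() is expected to provoke:

```csharp
using System;
using System.IO;
using System.Net;
using System.Threading;

class PartialDownload
{
    static void Main()
    {
        // Placeholder URL -- substitute the page you actually want to sample.
        WebRequest request = WebRequest.Create("http://example.com/big-page.html");
        ManualResetEvent done = new ManualResetEvent(false);

        request.BeginGetResponse(ar =>
        {
            WebResponse response = request.EndGetResponse(ar);
            StreamReader reader = new StreamReader(response.GetResponseStream());

            char[] buffer = new char[2048];
            int count = reader.Read(buffer, 0, buffer.Length);
            // ... parse the first chunk here ...

            // Abort the request *before* closing: this tears down the
            // connection, so the Close() calls below return immediately
            // instead of draining the remaining (possibly multi-MB) body.
            request.Abort();
            try
            {
                reader.Close();
                response.Close();
            }
            catch (WebException) { /* can be thrown once the request is aborted */ }

            done.Set();
        }, null);

        done.WaitOne(); // keep the process alive until the callback finishes
    }
}
```

The same Abort()-before-Close() ordering should also help in the purely synchronous version of the original code; the asynchronous form is shown here because it is what the answer recommends.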