Downloading multiple files with WebClient

Problem Description

I'm trying to download files from a web server via an NUnit test case like this:

[TestCase("url_to_test_server/456.pdf")]
[TestCase("url_to_test_server/457.pdf")]
[TestCase("url_to_test_server/458.pdf")]
public void Test(string url)
{
    using (WebClient client = new WebClient())
    {
        client.Headers.Add("User-Agent", "Mozilla/4.0 (compatible; MSIE 8.0)");
        client.DownloadFile(new Uri(url), @"C:\Temp\" + Path.GetFileName(url));
    }
}

This code works, but when I try to get the file size, it hangs.

[TestCase("url_to_test_server/456.pdf")]
[TestCase("url_to_test_server/457.pdf")]
[TestCase("url_to_test_server/458.pdf")]
public void Test(string url)
{
    using (WebClient client = new WebClient())
    {
        client.Headers.Add("User-Agent", "Mozilla/4.0 (compatible; MSIE 8.0)");
        client.OpenRead(url); // the returned Stream is never disposed
        Int64 bytes_total = Convert.ToInt64(client.ResponseHeaders["Content-Length"]);
        client.DownloadFile(new Uri(url), @"C:\Temp\" + Path.GetFileName(url));
    }
}

How can I fix this?

Answer

Waking up a dead post, but here is the answer...

This issue happens when the server does not provide Content-Length in the response header. You have to fix that on the server side.

Another reason this happens is when we have reached the connection limit to the server. So I am assuming that your issue was similar, and that it was hanging on the second or third try in a loop.
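If the per-host connection limit is the culprit, one mitigation (a sketch; this treats the symptom, while disposing the streams below fixes the cause) is to raise the default limit before the downloads start:

```csharp
using System.Net;

class ConnectionLimitSetup
{
    static void Configure()
    {
        // The classic .NET Framework defaults to 2 concurrent connections
        // per host for client apps. Raising it makes parallel test cases
        // less likely to queue behind leaked connections, but undisposed
        // streams will still eventually exhaust the pool.
        ServicePointManager.DefaultConnectionLimit = 10;
    }
}
```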

When we call OpenRead, it opens a stream. We just need to close this stream after getting the file size to make it work properly.
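Applied to the failing test above, the fix is simply to dispose the stream that OpenRead returns (same placeholder URLs and Temp path as in the question):

```csharp
[TestCase("url_to_test_server/456.pdf")]
[TestCase("url_to_test_server/457.pdf")]
[TestCase("url_to_test_server/458.pdf")]
public void Test(string url)
{
    using (WebClient client = new WebClient())
    {
        client.Headers.Add("User-Agent", "Mozilla/4.0 (compatible; MSIE 8.0)");

        Int64 bytes_total;
        // Dispose the stream returned by OpenRead; otherwise the
        // connection stays open and later requests hang once the
        // per-host connection limit is reached.
        using (Stream stream = client.OpenRead(url))
        {
            bytes_total = Convert.ToInt64(client.ResponseHeaders["Content-Length"]);
        }

        client.DownloadFile(new Uri(url), @"C:\Temp\" + Path.GetFileName(url));
    }
}
```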

Here is the code I use to get the size:

    /// <summary>
    /// Gets file size from a url using WebClient and Stream classes
    /// </summary>
    /// <param name="address">url</param>
    /// <param name="useHeaderOnly">requests only headers instead of full file</param>
    /// <returns>File size, or -1 if there is an issue.</returns>
    static Int64 GetFileSize(string address, bool useHeaderOnly = false)
    {
        Int64 retVal = 0;
        try
        {
            if(useHeaderOnly)
            {
                WebRequest request = WebRequest.Create(address);
                request.Method = "HEAD";

                // WebResponse also has to be closed otherwise we get the same issue of hanging on the connection limit. Using statement closes it automatically.
                using (WebResponse response = request.GetResponse())
                {
                    if (response != null)
                    {
                        retVal = response.ContentLength;
                        //retVal =  Convert.ToInt64(response.Headers["Content-Length"]);
                    }
                }
                request = null;
            }
            else
            {
                using (WebClient client = new WebClient())
                {
                    // Stream has to be closed otherwise we get the issue of hanging on the connection limit. Using statement closes it automatically.
                    using (Stream response = client.OpenRead(address))
                    {
                        retVal = Convert.ToInt64(client.ResponseHeaders["Content-Length"]);
                    }
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
            retVal = -1;
        }


        return retVal;
    }
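A hypothetical call site might look like this (the URL placeholder mirrors the ones in the question):

```csharp
// Query only the headers first; fall back to a full GET if the
// server rejects HEAD requests, which some servers do.
Int64 size = GetFileSize("url_to_test_server/456.pdf", useHeaderOnly: true);
if (size < 0)
    size = GetFileSize("url_to_test_server/456.pdf");
Console.WriteLine("Size: " + size + " bytes");
```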
