Problem Description
I would like to use this example:
Pretty much for the very reason mentioned on this page.
I'm trying to serve up larger files (100-200 MB in general) and need to 'output' the data in chunks instead of reading it all into memory with curl_exec(). My web host only allows me 64 MB of memory, so I can't read that much information at once.
Any suggestions here? Thanks in advance.
Recommended Answer
This is pretty easy. All you need to do is provide cURL with a callback to handle data as it comes in.
function onResponseBodyData($ch, $data)
{
    // Send each chunk straight to the client as it arrives
    echo $data;

    // Tell cURL how many bytes of this chunk were handled
    return strlen($data);
}

curl_setopt($ch, CURLOPT_WRITEFUNCTION, 'onResponseBodyData');
Returning the length of the data from your callback is important. It signifies how much data you processed. If you return something other than the length of the data passed in (such as 0), then the request is aborted.
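To put the callback in context, here is a minimal end-to-end sketch of the FTP download. The URL, credentials, file name, and buffer size are placeholders to substitute with your own values; CURLOPT_BUFFERSIZE is only a hint to cURL about the preferred chunk size.

// Hypothetical FTP source -- replace with your own URL and credentials
$url = 'ftp://ftp.example.com/path/to/large-file.zip';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_USERPWD, 'user:password');             // hypothetical credentials
curl_setopt($ch, CURLOPT_BUFFERSIZE, 128 * 1024);               // ask cURL for ~128 KB chunks
curl_setopt($ch, CURLOPT_WRITEFUNCTION, 'onResponseBodyData');  // callback defined above

// Headers so the browser treats the stream as a download; adjust as needed
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="large-file.zip"');

// With CURLOPT_WRITEFUNCTION set, curl_exec() returns true/false instead of
// the body, so memory usage stays bounded regardless of file size
if (curl_exec($ch) === false) {
    error_log('cURL error: ' . curl_error($ch));
}
curl_close($ch);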
Now, make sure you don't have output buffering turned on, and configure your server to not buffer the entire response before sending. It will work out of the box on most configurations.
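If output buffering is enabled on the PHP side, one way to clear it before starting the transfer is along these lines (a sketch; whether an explicit flush() is also needed depends on your server configuration):

// Discard any active PHP output buffers so chunks are not held back
while (ob_get_level() > 0) {
    ob_end_clean();
}

// Inside the callback you could also call flush() after echo
// to push each chunk to the web server immediately.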
You can find more examples here: http://curl.haxx.se/libcurl/php/examples/callbacks.html