Problem Description
I am downloading a CSV file from another server as a data feed from a vendor.

I am using curl to get the contents of the file and saving that into a variable called $contents.
I can get to that part just fine, but I tried exploding by \r and \n to get an array of lines, and it fails with an 'out of memory' error.
I echo strlen($contents) and it's about 30.5 million chars.
I need to manipulate the values and insert them into a database. What do I need to do to avoid memory allocation errors?
Recommended Answer
PHP is choking because it's running out of memory. Instead of having curl populate a PHP variable with the contents of the file, use the CURLOPT_FILE option to save the file to disk instead.
// Pseudo, untested code to give you the idea
$ch = curl_init('http://example.com/feed.csv'); // your vendor's feed URL
$fp = fopen('path/to/save/file', 'w');

// Write the response body straight to disk instead of into memory
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_exec($ch);
curl_close($ch);
fclose($fp);
Then, once the file is saved, instead of using the file or file_get_contents functions (which would load the entire file into memory, killing PHP again), use fopen and fgets to read the file one line at a time.
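A minimal sketch of that line-by-line loop, assuming the same 'path/to/save/file' used above; fgetcsv is used here instead of fgets because it both reads one line and splits it into fields, so no explode on \r or \n is needed:

```php
<?php
// Stream the saved feed one line at a time instead of loading
// all 30.5 million characters into memory at once.
$fp = fopen('path/to/save/file', 'r');
if ($fp === false) {
    die('Could not open feed file');
}
while (($fields = fgetcsv($fp)) !== false) {
    // $fields is an array of the values on one CSV line;
    // manipulate them and run your database INSERT here,
    // ideally via a prepared statement.
}
fclose($fp);
```

Because only one line is held in memory at a time, peak memory stays flat regardless of how large the downloaded file is.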