Problem Description
My task is to filter some data from a Perl script through an external utility (addr2line). The data size is quite large. I need to print a lot of data to the program's stdin and read a lot of data back (from the program's stdout) into my script.
Right now I do this with IPC::Open2, but I don't interleave reading and writing. Is this legal? Will open2 buffer any amount of data in the pipe?
My code:
my $cmd="addr2line -e $prog_name ";
use IPC::Open2;
local (*Reader, *Writer);
my $pid = open2(\*Reader, \*Writer, $cmd);
for(@requests) { # this array is HUGE, 100s of thousands of entries
print Writer "$_\n";
}
close Writer;
for(@requests) {
$function_name = <Reader>;
$filesource = <Reader>;
#... store ..
}
close Reader;
waitpid($pid,0);
Recommended Answer
Yes, you will run into buffer capacity constraints the way your program is written. Your input buffer (Reader) will fill up and block the execution of your external program.
Mixing reading and writing would help, as you would be emptying the input buffer at about the same rate that the external program is filling it.
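For illustration, here is a minimal sketch of that interleaved pattern (not part of the original answer). It assumes addr2line is run with -f so each address produces exactly two output lines, and it only avoids deadlock if the external program flushes its output after every input line, which is the usual IPC::Open2 caveat:

use IPC::Open2;

# Hypothetical sketch: send one address, then read its two reply lines,
# so neither pipe buffer ever has to hold more than one request.
my $pid = open2(my $Reader, my $Writer, "addr2line -f -e $prog_name");
for my $addr (@requests) {
    print {$Writer} "$addr\n";       # one request ...
    my $function_name = <$Reader>;   # ... immediately followed by
    my $filesource    = <$Reader>;   # its two-line answer
    # ... store ...
}
close $Writer;
close $Reader;
waitpid($pid, 0);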
Another thing that would help is using files for interprocess communication instead of pipes or sockets (as IPC::Open2 does). Then you would be limited only by the amount of free disk space. You could do it yourself, though Forks::Super uses files for IPC by default.
use Forks::Super 'open2';
...
my ($Reader, $Writer, $pid) = open2(@command);
for (@requests) { print $Writer "$_\n" }
close $Writer;
for (@requests) { ... read ... }
close $Reader;
waitpid $pid, 0;
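If you do want to roll the file-based approach by hand instead of using Forks::Super, a hedged sketch could look like the following (the temporary files, the -f flag, and the error handling are assumptions for illustration, not part of the original answer):

use File::Temp qw(tempfile);

# Do-it-yourself file IPC: spool all requests to a temp file, let
# addr2line read and write plain files, then parse the result.
# Only free disk space limits the amount of data.
my ($in_fh,  $in_file)  = tempfile();
my ($out_fh, $out_file) = tempfile();
close $out_fh;

print {$in_fh} "$_\n" for @requests;
close $in_fh;

system("addr2line -f -e $prog_name < $in_file > $out_file") == 0
    or die "addr2line failed: $?";

open my $Reader, '<', $out_file or die "open $out_file: $!";
for (@requests) {
    my $function_name = <$Reader>;
    my $filesource    = <$Reader>;
    # ... store ...
}
close $Reader;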