This article looks at faster alternatives to egrep in a shell script or Perl, which may be a useful reference if you face a similar problem.
Problem description
$logFile
has size 20GB
RAW_FILTER='pattern1|pattern2|pattern3|pattern4|pattern5|....up to pattern M'
RAW_FILTER1='pattern6|pattern7|pattern8|pattern9|pattern10|...up to pattern N'
My code is something like below:
cat $logFile | egrep "$RAW_FILTER" >> $filesNeedToCheck &
cat $logFile | egrep "$RAW_FILTER1" >> $filesNeedToCheck &
wait
Is there any other faster alternative? Any help is highly appreciated.
Solution
If it's a log file, perhaps you can split it into smaller chunks and run greps in subprocesses?
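The chunk-and-parallel-grep idea above can be sketched as follows. This is a minimal demonstration, not the poster's exact setup: it assumes GNU `split` (for the `-n l/N` line-based split mode), uses small example values for `logFile`, `filesNeedToCheck`, and the two filters, and combines both filters into a single pattern so the big file is read once instead of twice. Each chunk writes to its own output file so that the parallel greps cannot interleave lines.

```shell
#!/bin/sh
# Example values standing in for the poster's real variables.
logFile=big.log
filesNeedToCheck=matches.txt
RAW_FILTER='pattern1|pattern2'
RAW_FILTER1='pattern6|pattern7'

# A tiny sample log so the sketch is runnable end to end.
printf 'pattern1 foo\nirrelevant\npattern7 bar\n' > "$logFile"
: > "$filesNeedToCheck"   # start fresh for the demo

# Split on line boundaries into 8 chunks (GNU coreutils split).
split -n l/8 "$logFile" chunk_

# One grep per chunk, in parallel; each writes its own file to
# avoid interleaved output from concurrent appends.
for f in chunk_*; do
  grep -E "$RAW_FILTER|$RAW_FILTER1" "$f" > "$f.out" &
done
wait

# Concatenate the per-chunk results in order, then clean up.
cat chunk_*.out >> "$filesNeedToCheck"
rm -f chunk_*
```

Two further points worth noting: if the patterns are fixed strings rather than regular expressions, `grep -F` is typically much faster than `grep -E`; and merging the two filters into one pass, as above, already halves the I/O even without parallelism, since the original script scans the 20GB file twice.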
That concludes this article on faster alternatives to egrep in a shell script or Perl; hopefully the answer above is helpful.