This article describes how to get the effect of uniq -d without presorting (or a similar step).
Question
I am aware that I can remove duplicated lines without presorting, for example:
awk '!x[$0]++' file
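For context, this dedup one-liner works by counting occurrences in an associative array. A quick sketch on made-up sample input (the file name dupes.txt is illustrative):

```shell
# Sample input with duplicates, one value per line: a, b, a, c, b, a
printf 'a\nb\na\nc\nb\na\n' > dupes.txt

# x[$0]++ evaluates to 0 (falsy) the first time a line is seen,
# so !x[$0]++ is true only on the first occurrence.
awk '!x[$0]++' dupes.txt
# prints, in original order:
# a
# b
# c
```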
However, my goal is to print only lines which are duplicated, and to print each of them only once. If it were not for the presorting problem,
sort | uniq -d
would be perfect. But the order is of great importance to me. Is there a way to do this with awk, grep, or something similar?
I am looking for a one-liner, if possible, that does not require writing a script.
Answer
Just check the value of x[$0]:
awk 'x[$0]++ == 1' file.txt
The above prints a line when it is seen for the second time.
Or, with the prefix ++ operator:
awk '++x[$0] == 2' file.txt
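The two forms behave identically; only the point at which the counter is incremented differs. A sketch on made-up sample input showing both, and that the original order of (second) occurrence is preserved:

```shell
# Sample input with duplicates, one value per line: a, b, a, c, b, a
printf 'a\nb\na\nc\nb\na\n' > dupes.txt

# Post-increment: the condition is true exactly when the counter was 1,
# i.e. on the second occurrence of a line.
awk 'x[$0]++ == 1' dupes.txt
# prints:
# a
# b

# Prefix increment: the counter is bumped first, so compare against 2.
awk '++x[$0] == 2' dupes.txt
# prints the same:
# a
# b
```

Note that each duplicated line is printed at the position of its second occurrence, whereas sort | uniq -d would emit the duplicates in sorted order instead.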