Is bash > redirection atomic?

Problem description

I have a weird problem with my crontab job. My crontab job does the following:

 program > file

Sometimes, however, the file gets filled with random data that I can't explain.

I wonder if it could be a previous crontab job that is taking longer to run and is somehow mixing its output into file together with the current crontab job's output?

Overall, my question is: is the > operation atomic? That is, if two programs both do program > file, will the last one to finish be the one whose data ends up in file?

Solution

No, it's not atomic. Not even a little bit atomic.

The redirection does two things:

  1. It opens the file by name, creating it if necessary.

  2. It truncates the file.

After that, the utility is started, with its stdout assigned to the opened file.
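
A rough way to visualize that sequence in shell terms is the sketch below; it only illustrates the order of operations (open and truncate first, start the utility afterwards), not the shell's actual implementation:

 # Sketch of what `program > file` amounts to (illustration only):
 (
   exec > file     # open file by name, creating it if needed, and truncate it;
                   # the open descriptor becomes this shell's stdout
   exec program    # only now is the utility started; it inherits that descriptor
 )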

If two scripts do that at more or less the same time, they will both end up writing to the same file, but since they have independent file descriptors, each process will overwrite the other process's output, resulting in a grand interleaving of bytes, some from one process and some from the other.
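
A minimal demo of this effect, offered only as an illustration (out.txt is just an example name; run it in bash):

 # Two writers share the file name, but each gets its own descriptor and offset.
 ( for i in {1..5000}; do echo "writer-A line $i"; done ) > out.txt &
 ( for i in {1..5000}; do echo "writer-B line $i"; done ) > out.txt &
 wait
 # out.txt typically ends up with lines from both writers clobbering each other
 # at overlapping offsets, including garbled partial lines.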

Another common race condition relates to the fact that the file is truncated by the shell before the utility starts executing. Consequently, even if the utility only writes a single line to the file, a concurrent utility that reads the file may find it empty.
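
The following is a small, timing-dependent sketch of that second race (the one-second delay simply stands in for a slow-starting utility):

 # The shell opens and truncates file before the command group writes anything,
 # so a concurrent reader sees an empty file during the start-up window.
 { sleep 1; echo "one line"; } > file &
 cat file      # run immediately: almost certainly prints nothing yet
 wait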

