Problem description
I'm trying to calculate the autocorrelation of sample windows in a time series using the code below. I apply an FFT to the window, then compute the squared magnitudes from the real and imaginary parts (setting the imaginary parts to zero), and finally take the inverse transform to obtain the autocorrelation:
DoubleFFT_1D fft = new DoubleFFT_1D(magCnt);
fft.realForward(magFFT);
magFFT[0] = (magFFT[0] * magFFT[0]);
for (int i = 1; i < (magCnt - (magCnt % 2)) / 2; i++) {
    magFFT[2*i] = magFFT[2*i] * magFFT[2*i] + magFFT[2*i + 1] * magFFT[2*i + 1];
    magFFT[2*i + 1] = 0.0;
}
if (magCnt % 2 == 0) {
    magFFT[1] = (magFFT[1] * magFFT[1]);
} else {
    magFFT[magCnt/2] = (magFFT[magCnt-1] * magFFT[magCnt-1] + magFFT[1] * magFFT[1]);
}
autocorr = new double[magCnt];
System.arraycopy(magFFT, 0, autocorr, 0, magCnt);
DoubleFFT_1D ifft = new DoubleFFT_1D(magCnt);
ifft.realInverse(autocorr, false);
for (int i = 1; i < autocorr.length; i++)
    autocorr[i] /= autocorr[0];
autocorr[0] = 1.0;
The first question is: it can be seen that this code maps the autocorrelation result to the [0, 1] range, although correlation is supposed to lie between -1 and 1. Of course it is easy to map the results to the [-1, 1] range, but I'm not sure whether that mapping is correct. How should we interpret the values in the resulting autocorr array?
Secondly, with this code I'm getting good results for some periodic series; that is, I get higher values at the autocorrelation indices corresponding to the period of the signal. However, the results go weird when I apply it to non-periodic signals: all the values in the autocorr array appear to be very close to 1. What is the reason for that?
Recommended answer
For FFT-based algorithms to work, you must pay careful attention to the definitions, including the conventions of the library you are using. You seem to be confusing the "signal processing" convention for AC with the "statistical" one. And then there is FFT wrapping and zero padding.
Here is code that works for the even-N case, using the signal-processing convention. It's tested against a brute-force wrapped autocorrelation. The comments show how to convert it to the statistical convention. For the statistical ac, the mean of the data is subtracted out; this can be done merely by zeroing out the "0 Hz" component of the FFT. Then the zeroth element of the ac is the variance, and you can normalize by dividing through by that quantity. The resulting values will fall in [-1, 1], as you say.
Your code seems to be doing the dividing through, but not ignoring the 0 Hz component of the data. So it's computing some kind of mashup of the conventions.
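For reference, the two conventions can be written out explicitly. This is only a sketch for the wrapped (circular) case that the FFT computes, with N the window length and \mu the sample mean:

\[
r[k] \;=\; \sum_{i=0}^{N-1} x[i]\, x[(i+k) \bmod N] \qquad \text{(signal-processing, wrapped)}
\]
\[
\rho[k] \;=\; \frac{\sum_{i=0}^{N-1} \bigl(x[i]-\mu\bigr)\bigl(x[(i+k) \bmod N]-\mu\bigr)}{\sum_{i=0}^{N-1} \bigl(x[i]-\mu\bigr)^{2}} \qquad \text{(statistical; } \rho[0]=1,\ |\rho[k]|\le 1\text{)}
\]

Zeroing the 0 Hz bin of the FFT subtracts \mu, and dividing through by the zero-lag value performs the normalization; those are exactly the two commented-out changes in the code below.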
import edu.emory.mathcs.jtransforms.fft.DoubleFFT_1D;

import java.util.Arrays;

public class TestFFT {

    void print(String msg, double[] x) {
        System.out.println(msg);
        for (double d : x) System.out.println(d);
    }

    /**
     * This is a "wrapped" signal processing-style autocorrelation.
     * For "true" autocorrelation, the data must be zero padded.
     */
    public void bruteForceAutoCorrelation(double[] x, double[] ac) {
        Arrays.fill(ac, 0);
        int n = x.length;
        for (int j = 0; j < n; j++) {
            for (int i = 0; i < n; i++) {
                ac[j] += x[i] * x[(n + i - j) % n];
            }
        }
    }

    private double sqr(double x) {
        return x * x;
    }

    public void fftAutoCorrelation(double[] x, double[] ac) {
        int n = x.length;
        // Assumes n is even.
        DoubleFFT_1D fft = new DoubleFFT_1D(n);
        fft.realForward(x);
        ac[0] = sqr(x[0]);
        // ac[0] = 0;  // For statistical convention, zero out the mean
        ac[1] = sqr(x[1]);
        for (int i = 2; i < n; i += 2) {
            ac[i] = sqr(x[i]) + sqr(x[i + 1]);
            ac[i + 1] = 0;
        }
        DoubleFFT_1D ifft = new DoubleFFT_1D(n);
        ifft.realInverse(ac, true);
        // For statistical convention, normalize by dividing through with variance
        // for (int i = 1; i < n; i++)
        //     ac[i] /= ac[0];
        // ac[0] = 1;
    }

    void test() {
        double[] data = { 1, -81, 2, -15, 8, 2, -9, 0 };
        double[] ac1 = new double[data.length];
        double[] ac2 = new double[data.length];
        bruteForceAutoCorrelation(data, ac1);
        fftAutoCorrelation(data, ac2);
        print("bf", ac1);
        print("fft", ac2);
        double err = 0;
        for (int i = 0; i < ac1.length; i++)
            err += sqr(ac1[i] - ac2[i]);
        System.out.println("err = " + err);
    }

    public static void main(String[] args) {
        new TestFFT().test();
    }
}
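If you want the statistical convention directly (values in [-1, 1], as asked in the question), the two commented-out changes above can be collected into one method. The following is only an illustrative sketch, not part of the original answer: the name fftAutoCorrelationStatistical is made up, and it assumes the same even-length input and JTransforms packed layout as fftAutoCorrelation, so it could be added to the TestFFT class above.

    // Illustrative sketch (not from the original answer): statistical-convention
    // variant of fftAutoCorrelation. Assumes n is even and the same JTransforms
    // packed real-FFT layout. Like fftAutoCorrelation, it overwrites x with its FFT.
    public void fftAutoCorrelationStatistical(double[] x, double[] ac) {
        int n = x.length;
        DoubleFFT_1D fft = new DoubleFFT_1D(n);
        fft.realForward(x);
        ac[0] = 0;                          // zero the 0 Hz bin: subtracts the mean
        ac[1] = sqr(x[1]);                  // Nyquist bin (packed into x[1] for even n)
        for (int i = 2; i < n; i += 2) {
            ac[i] = sqr(x[i]) + sqr(x[i + 1]);   // squared magnitude of bin i/2
            ac[i + 1] = 0;
        }
        DoubleFFT_1D ifft = new DoubleFFT_1D(n);
        ifft.realInverse(ac, true);
        double zeroLag = ac[0];             // sum of squared deviations from the mean
        for (int i = 1; i < n; i++)
            ac[i] /= zeroLag;               // normalize: lags now lie in [-1, 1]
        ac[0] = 1.0;
    }

By construction this gives 1 at lag 0 and values between -1 and 1 at the other lags, which is the interpretation asked about in the first question.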