I try to stabilize video with a Kalman filter for smoothing, but I have some problems. Each time, I have two frames: the current one and the previous one. Here is my workflow:

- Compute goodFeaturesToTrack()
- Compute the optical flow using calcOpticalFlowPyrLK()
- Keep only the good points
- Estimate a rigid transformation
- Smooth using the Kalman filter
- Warp the picture

But I think there is something wrong with the Kalman filter, because at the end my video is still not stabilized and it's not smooth at all; it's even worse than the original...

Here is my Kalman code:

```cpp
void StabilizationTestSimple2::init_kalman(double x, double y)
{
    KF.statePre.at<float>(0) = x;
    KF.statePre.at<float>(1) = y;
    KF.statePre.at<float>(2) = 0;
    KF.statePre.at<float>(3) = 0;

    KF.transitionMatrix = *(Mat_<float>(4,4) << 1,0,1,0,   0,1,0,1,   0,0,1,0,   0,0,0,1);
    KF.processNoiseCov = *(Mat_<float>(4,4) << 0.2,0,0.2,0,   0,0.2,0,0.2,   0,0,0.3,0,   0,0,0,0.3);

    setIdentity(KF.measurementMatrix);
    setIdentity(KF.processNoiseCov, Scalar::all(1e-6));
    setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
    setIdentity(KF.errorCovPost, Scalar::all(.1));
}
```

And here is how I use it. I put only the interesting part; all this code is inside a for loop. cornersPrev2 and cornersCurr2 contain all the feature points detected just before (with calcOpticalFlowPyrLK()):

```cpp
/// Transformation
Mat transformMatrix = estimateRigidTransform(cornersPrev2, cornersCurr2, false);

// in rare cases no transform is found. We'll just use the last known good transform.
if (transformMatrix.data == NULL) {
    last_transformationmatrix.copyTo(transformMatrix);
}
transformMatrix.copyTo(last_transformationmatrix);

// decompose T
double dx = transformMatrix.at<double>(0,2);
double dy = transformMatrix.at<double>(1,2);
double da = atan2(transformMatrix.at<double>(1,0), transformMatrix.at<double>(0,0));

// Accumulated frame to frame transform
x += dx;
y += dy;
a += da;
std::cout << "accumulated x,y: (" << x << "," << y << ")" << endl;

if (compteur == 0) {
    init_kalman(x, y);
}
else {
    vector<Point2f> smooth_feature_point;
    Point2f smooth_feature = kalman_predict_correct(x, y);
    smooth_feature_point.push_back(smooth_feature);
    std::cout << "smooth x,y: (" << smooth_feature.x << "," << smooth_feature.y << ")" << endl;

    // target - current
    double diff_x = smooth_feature.x - x;
    double diff_y = smooth_feature.y - y;

    dx = dx + diff_x;
    dy = dy + diff_y;

    transformMatrix.at<double>(0,0) = cos(da);
    transformMatrix.at<double>(0,1) = -sin(da);
    transformMatrix.at<double>(1,0) = sin(da);
    transformMatrix.at<double>(1,1) = cos(da);
    transformMatrix.at<double>(0,2) = dx;
    transformMatrix.at<double>(1,2) = dy;

    warpAffine(currFrame, outImg, transformMatrix, prevFrame.size(),
               INTER_NEAREST | WARP_INVERSE_MAP, BORDER_CONSTANT);

    namedWindow("stabilized");
    imshow("stabilized", outImg);
    namedWindow("Original");
    imshow("Original", originalFrame);
}
```

Does anyone have an idea why it's not working? Thanks.

Solution:

```cpp
KF.transitionMatrix = *(Mat_<float>(4,4) << 1,0,1,0,   0,1,0,1,   0,0,1,0,   0,0,0,1);
KF.processNoiseCov = *(Mat_<float>(4,4) << 0.2,0,0.2,0,   0,0.2,0,0.2,   0,0,0.3,0,   0,0,0,0.3);
setIdentity(KF.measurementMatrix);
setIdentity(KF.processNoiseCov, Scalar::all(1e-6));
setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
setIdentity(KF.errorCovPost, Scalar::all(.1));
```

Your processNoiseCov is not symmetric, and I doubt you have the right off-diagonal terms. I would stick with a diagonal matrix there until you understand the KF better. On the other hand, you appear to immediately overwrite it with that setIdentity with tiny values, which is probably a mistake. Maybe you added that after having problems with the invalid matrix above? If you describe the framerate and the units of your state (meters? pixels?) we can talk about how to pick good values for processNoiseCov and measurementNoiseCov.

EDIT:

Ok, given that your state is [ x_pixels, y_pixels, dx_pixels, dy_pixels ], here are some tips:

Your measurement matrix is I, so you are saying that you are measuring the exact same things as are in your state (this is somewhat uncommon: often you'd only measure some subset of your state, e.g. you'd have no estimate of velocity in your measurements).

Given that your measurement matrix is I, the meanings of processNoiseCov and measurementNoiseCov are similar, so I will discuss them together. Your processNoiseCov should be a diagonal matrix where each term is the variance (the square of the standard deviation) of how those values might change between frames. For example, if there is a 68% chance (see normal distribution) that your pixel offsets are within 100 pixels each frame, the diagonal entry for position should be 100 * 100 = 10000 (remember, variance is squared stddev). You need to make a similar estimate for how velocity will change. (Advanced: there should be co-varying (off-diagonal) terms because change in velocity is related to change in position, but you can get by without this until that makes sense to you. I've explained it in other answers.)
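To make the arithmetic in the previous paragraph concrete, here is a minimal sketch (not the asker's code) of building the 4-state filter with a purely diagonal processNoiseCov derived from per-frame standard deviations. The helper name and the numeric values passed in are made up for illustration and would still need to be tuned as described above:

```cpp
#include <opencv2/video/tracking.hpp>

// Illustrative helper (hypothetical): a 4-state [x, y, dx, dy] filter whose
// process noise is diagonal, with variance = stddev * stddev per frame.
cv::KalmanFilter makeConstantVelocityKF(float posStdDevPx, float velStdDevPx)
{
    cv::KalmanFilter kf(4, 4, 0, CV_32F);   // 4 state variables, 4 measured variables

    // Constant-velocity model: position is advanced by velocity each frame.
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, 1, 0,
        0, 1, 0, 1,
        0, 0, 1, 0,
        0, 0, 0, 1);

    cv::setIdentity(kf.measurementMatrix);  // the full state is measured, as in the question

    float posVar = posStdDevPx * posStdDevPx;   // e.g. 100 px stddev -> 10000
    float velVar = velStdDevPx * velStdDevPx;

    // Diagonal process noise: per-frame variances on the diagonal only.
    kf.processNoiseCov = (cv::Mat_<float>(4, 4) <<
        posVar, 0,      0,      0,
        0,      posVar, 0,      0,
        0,      0,      velVar, 0,
        0,      0,      0,      velVar);

    // measurementNoiseCov and errorCovPost would be filled in the same spirit,
    // from how noisy the measurements and the initial state are believed to be.
    return kf;
}
```

Unlike the code in the question, nothing here calls setIdentity on processNoiseCov afterwards, so the tuned diagonal actually survives.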
Your additive covariance in processNoiseCov is added every frame, so keep in mind the changes represented are over 1/25th of a second.

Your measurementNoiseCov has the same sort of units (again, the measurement matrix is identity) but should reflect how accurate your measurements are compared to the unknowable actual true values.

Often you can measure actual values for your process and measurements and compute their actual covariance (in Excel, Python, MATLAB, or whatever; see the sketch at the end of this answer).

Your errorCovPost is your initial uncertainty, expressed just like your per-frame additive covariance, but it is a description of how certain you are about your initial state being correct.

Getting those covariance matrices right is the most important thing when using a KF. You will always spend more time getting those right than doing anything else when designing a KF.
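To illustrate the suggestion above about computing covariances from real data, here is a rough OpenCV sketch. The function name and the idea of logging per-frame errors (measured position minus a trusted reference) are assumptions made for the example, not part of the original answer:

```cpp
#include <vector>
#include <opencv2/core.hpp>

// Hypothetical offline step: given logged per-frame measurement errors
// (measured position minus a trusted reference), estimate the 2x2
// measurement noise covariance empirically instead of guessing it.
cv::Mat estimateMeasurementNoiseCov(const std::vector<cv::Point2f>& errors)
{
    cv::Mat samples(static_cast<int>(errors.size()), 2, CV_32F);
    for (int i = 0; i < samples.rows; ++i) {
        samples.at<float>(i, 0) = errors[static_cast<size_t>(i)].x;
        samples.at<float>(i, 1) = errors[static_cast<size_t>(i)].y;
    }

    cv::Mat covar, mean;
    cv::calcCovarMatrix(samples, covar, mean,
                        cv::COVAR_NORMAL | cv::COVAR_ROWS | cv::COVAR_SCALE,
                        CV_32F);
    return covar;   // variances on the diagonal, covariances off-diagonal
}
```

The returned matrix (or at least its diagonal) could then be copied into measurementNoiseCov; applying the same procedure to frame-to-frame state changes gives a data-driven starting point for processNoiseCov.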