This article explains how to remap points after rotating an image; the question and a recommended answer follow below.
Problem Description
I have a mathematical question: suppose I rotate an image around its center by an angle of 30°, using OpenCV with the following commands:
# rows, cols = img.shape[:2]
M = cv2.getRotationMatrix2D((cols/2, rows/2), 30, 1)
img_rotate = cv2.warpAffine(img, M, (cols, rows))
If I take the pixel (40, 40) of img_rotate, how can I know which is the corresponding pixel in the original image?
In other words, when I apply the rotation to an image I obtain the transformed image. Is there a way to obtain the mapping between points? For example, the (x, y) point of the new image corresponds to the (x', y') point of the original image.
Recommended Answer
Just apply the inverse rotation matrix to the points as an affine transformation.
import cv2
import numpy as np

# the inverse of a pure rotation about a fixed center is the rotation by the opposite angle
M_inv = cv2.getRotationMatrix2D((100/2, 300/2), -30, 1)

# points in the rotated image
points = np.array([[35., 0.],
                   [175., 0.],
                   [105., 200.],
                   [105., 215.]])

# append a column of ones so the 2x3 affine matrix can be applied directly
ones = np.ones(shape=(len(points), 1))
points_ones = np.hstack([points, ones])

# map the points back to the original image
transformed_points = M_inv.dot(points_ones.T).T
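
To answer the pixel (40, 40) from the question directly, the same idea can also be written with cv2.invertAffineTransform, which computes the inverse of a 2x3 affine matrix. A minimal sketch, assuming placeholder image dimensions (substitute the real rows and cols of your image):

import cv2
import numpy as np

# placeholder size for illustration; use the real rows, cols of your image
rows, cols = 300, 100

# the forward rotation that produced img_rotate
M = cv2.getRotationMatrix2D((cols/2, rows/2), 30, 1)

# exact inverse of the 2x3 affine matrix
M_inv = cv2.invertAffineTransform(M)

# pixel (40, 40) of img_rotate, in homogeneous coordinates
p = np.array([40., 40., 1.])

# corresponding (x', y') location in the original image
x_orig, y_orig = M_inv.dot(p)

For a pure rotation about the same center this yields the same matrix as calling getRotationMatrix2D with the negated angle, but it also handles general affine transforms, for example ones that include scaling.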