I have a photo like this:
[image: the original photo]

Then I convert it to a binary image and use Canny to detect the edges of the picture:

gray = cv.cvtColor(image, cv.COLOR_RGB2GRAY)   # convert to grayscale
edges = cv.Canny(gray, 100, 200)               # Canny edge detection (the thresholds here are placeholders)
edge = Image.fromarray(edges)                  # wrap the edge map as a PIL image

The result I get is:
[image: Canny edge result]
I want to get the area of region 2, like this:
[image: the same edge image with the enclosed region labeled 2]

My idea was to use HoughLines to find the lines in the picture and then compute the area of the triangle formed by those lines. However, since the enclosed region is not an exact triangle, this approach is not accurate. How can I get the area of region 2?
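For reference, this is roughly what that attempt looked like (a minimal sketch; the HoughLinesP parameters and the three vertex coordinates are placeholders, not values from my real code):

import cv2 as cv
import numpy as np

# 'edges' is the Canny edge image from the snippet above
lines = cv.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                       minLineLength=30, maxLineGap=10)

# Placeholder: suppose the three intersection points of the fitted lines are these
pts = np.array([[100, 50], [300, 60], [200, 250]], dtype=np.float32)

# Area of the triangle spanned by those points (shoelace formula)
triangle_area = cv.contourArea(pts)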

Best answer

The snippet below is a simple approach using floodFill and countNonZero instead of contourArea. As the OpenCV help for contourArea points out, that function computes the area via the Green formula, so its result can differ from the number of non-zero pixels you get when drawing the same contour with drawContours or fillPoly, and it gives wrong results for contours with self-intersections. Counting the pixels of the filled region directly avoids these issues.



Code:

import cv2
import numpy as np

# Input image
img = cv2.imread('images/YMMEE.jpg', cv2.IMREAD_GRAYSCALE)

# Needed due to JPG artifacts
_, temp = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Dilate to better detect contours
temp = cv2.dilate(temp, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))

# Find largest contour
cnts, _ = cv2.findContours(temp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
largestCnt = []
for cnt in cnts:
    if (len(cnt) > len(largestCnt)):
        largestCnt = cnt

# Determine center of area of largest contour
M = cv2.moments(largestCnt)
x = int(M["m10"] / M["m00"])
y = int(M["m01"] / M["m00"])

# Initialize mask for flood filling (floodFill needs a mask 2 pixels larger than the image)
height, width = temp.shape
mask = np.ones((height + 2, width + 2), np.uint8) * 255
mask[1:height + 1, 1:width + 1] = 0

# Generate intermediate image, draw largest contour, flood filled
temp = np.zeros(temp.shape, np.uint8)
temp = cv2.drawContours(temp, largestCnt, -1, 255, cv2.FILLED)
_, temp, mask, _ = cv2.floodFill(temp, mask, (x, y), 255)
temp = cv2.morphologyEx(temp, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))

# Count pixels in desired region
area = cv2.countNonZero(temp)

# Put result on original image
img = cv2.putText(img, str(area), (x, y), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, 255)

cv2.imshow('Input', img)
cv2.imshow('Temp image', temp)

cv2.waitKey(0)
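If you want to see how this relates to contourArea, you can re-extract the contour of the filled region and compare both numbers; since contourArea uses the Green formula, the two values will usually differ slightly (just a small check, reusing temp from the code above):

# Optional check: compare contourArea (Green formula) with the raw pixel count
region_cnts, _ = cv2.findContours(temp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
region = max(region_cnts, key=cv2.contourArea)
print('contourArea:  ', cv2.contourArea(region))
print('countNonZero: ', cv2.countNonZero(temp))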

Temporary image:

[image: intermediate image with region 2 flood-filled]

Resulting image:

[image: input image with the computed area drawn at the centroid]

Note: findContours has real problems on the right-hand side, where the line runs very close to the bottom image border, so some pixels there may be omitted.
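If that becomes a problem, one possible workaround (just a sketch, not something I tested on this image) would be to pad the binary image with a few background pixels before the findContours call, e.g. with cv2.copyMakeBorder, so the line no longer touches the image border:

# Sketch: pad the dilated binary image (the 'temp' fed into findContours above)
# with a black border; contour coordinates are then shifted by 'pad' pixels.
pad = 5
padded = cv2.copyMakeBorder(temp, pad, pad, pad, pad, cv2.BORDER_CONSTANT, value=0)
cnts, _ = cv2.findContours(padded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)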

Disclaimer: I'm new to Python in general, and especially to the Python API of OpenCV (C++ for the win). Comments, improvements, and pointers to Python pitfalls are highly welcome!
