
Detecting and Extracting Aruco Marker with White Border Issues


I am working on a project to detect and extract an Aruco marker from an image using OpenCV in Python. The process involves:

  1. Auto Color Correction: Applying CLAHE to improve contrast.
  2. White Border Extraction: Using HSV thresholding to detect the marker’s white outline.
  3. Contour Detection: Finding the best quadrilateral contour to isolate the marker.
  4. Perspective Transformation: Warping the detected marker to a standard view.

Issues:

  1. The detection works on some images but fails on others where the white border is not well-extracted.
  2. In some cases, the marker is detected but the perspective transform results in a distorted output.
  3. The contour detection sometimes picks unwanted shapes instead of the marker (a debug snippet after the script below shows how I inspect the candidates).
import cv2
import numpy as np

def auto_color_correction(image):
    """Apply histogram equalization on the L channel to correct colors."""
    lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    l = clahe.apply(l)
    corrected_lab = cv2.merge([l, a, b])
    return cv2.cvtColor(corrected_lab, cv2.COLOR_LAB2BGR)

def extract_aruco_white_border(image):
    """Extract white outline of the Aruco marker."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    lower_white = np.array([0, 0, 180], dtype=np.uint8)
    upper_white = np.array([180, 50, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower_white, upper_white)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=2)
    return mask

def find_aruco_contour(mask, original_image):
    """Finds the Aruco marker contour and extracts its area."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best_contour = None
    max_area = 0
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > 1000:  # skip small blobs that are unlikely to be the marker
            peri = cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)
            if len(approx) == 4 and area > max_area:
                max_area = area
                best_contour = approx
    if best_contour is not None:
        return extract_marker_region(original_image, best_contour)
    return original_image  # fallback: no suitable quadrilateral found

def extract_marker_region(image, contour):
    """Performs perspective transform to isolate the Aruco marker."""
    rect = np.array([contour[i][0] for i in range(4)], dtype="float32")
    rect = order_points(rect)
    width = max(np.linalg.norm(rect[0] - rect[1]), np.linalg.norm(rect[2] - rect[3]))
    height = max(np.linalg.norm(rect[0] - rect[3]), np.linalg.norm(rect[1] - rect[2]))
    dst = np.array([[0, 0], [width - 1, 0], [width - 1, height - 1], [0, height - 1]], dtype="float32")
    M = cv2.getPerspectiveTransform(rect, dst)
    return cv2.warpPerspective(image, M, (int(width), int(height)))

def order_points(pts):
    """Orders contour points to ensure correct perspective transformation."""
    s = pts.sum(axis=1)
    diff = np.diff(pts, axis=1)
    ordered = np.zeros((4, 2), dtype="float32")
    ordered[0] = pts[np.argmin(s)]     # top-left: smallest x + y
    ordered[1] = pts[np.argmin(diff)]  # top-right: smallest y - x
    ordered[2] = pts[np.argmax(s)]     # bottom-right: largest x + y
    ordered[3] = pts[np.argmax(diff)]  # bottom-left: largest y - x
    return ordered

image = cv2.imread("All Marker.jpg")
if image is None:
    print("Error: Could not load image.")
    exit(1)

corrected = auto_color_correction(image)  
white_mask = extract_aruco_white_border(corrected)  
marker_region = find_aruco_contour(white_mask, corrected.copy())  

cv2.imshow("Detected Aruco", marker_region)
cv2.waitKey(0)
cv2.destroyAllWindows()
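
For the third issue above (contour detection picking unwanted shapes), this is roughly how I inspect which candidates pass the filters. The helper below is debug-only and the name is my own, not part of the pipeline:

def debug_draw_candidates(mask, image):
    """Draw every contour that passes the area filter to see which quads are considered."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vis = image.copy()
    for cnt in contours:
        if cv2.contourArea(cnt) > 1000:
            peri = cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)
            # Green = quadrilateral candidates, red = non-quads that still passed the area filter.
            color = (0, 255, 0) if len(approx) == 4 else (0, 0, 255)
            cv2.drawContours(vis, [approx], -1, color, 2)
    return vis

# cv2.imshow("Candidate contours", debug_draw_candidates(white_mask, corrected))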

What I Tried:

  1. Adjusting the HSV threshold range for white.
  2. Modifying the kernel size for the morphological operations.
  3. Increasing/decreasing the contour area threshold.
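For reference, these adjustments looked roughly like the sweep below. The exact values varied per image, so treat the numbers as illustrative rather than the ones I settled on:

def sweep_white_masks(image_bgr):
    """Build masks for several illustrative HSV cut-offs and kernel sizes (tuning/debug only)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    results = []
    for v_min in (160, 180, 200):        # lower bound on brightness (V)
        for s_max in (40, 60, 80):       # upper bound on saturation (S)
            raw = cv2.inRange(hsv, np.array([0, 0, v_min], np.uint8),
                              np.array([180, s_max, 255], np.uint8))
            for k in (3, 5):             # morphology kernel size
                kernel = np.ones((k, k), np.uint8)
                mask = cv2.morphologyEx(raw, cv2.MORPH_OPEN, kernel, iterations=2)
                mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=2)
                results.append(((v_min, s_max, k), mask))
    return results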

What I Need Help With:

  1. How can I make the white border extraction more robust?
  2. How can I ensure the perspective transform correctly aligns the marker?
  3. Are there better preprocessing steps to improve detection?
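To make the second question concrete: since an Aruco marker is square, I would expect the warped output to be roughly square as well. The check below only illustrates what I consider a "correct" alignment (function name and tolerance are my own):

def warp_looks_square(warped, tolerance=0.25):
    """Return True if the warped marker region is approximately square."""
    h, w = warped.shape[:2]
    if h == 0 or w == 0:
        return False
    return abs(w / float(h) - 1.0) <= tolerance

# Example: flag the distorted cases instead of displaying them blindly.
# if not warp_looks_square(marker_region):
#     print("Warning: perspective transform looks skewed")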

Thanks
