Better alignment techniques for large shifts in lighting with SIFT


I have a use case where I'm trying to align a stack of very similar images that have different exposures, some light and some dark. The stack is aligned to a final target image. The target image has been edited, but the homography is pretty much the same; only the lighting differs.

How can I improve the alignment?

Here is the script I've tried that has produced the best results; however, it still shows ghosting.

import cv2
import numpy as np

# Alignment parameters
RANSAC_THRESHOLD = 2.5   # Even stricter RANSAC reprojection threshold
MATCH_RATIO = 0.5        # Even stricter Lowe ratio test
MIN_MATCHES = 15         # More required matches
MASK_HEIGHT_RATIO = 0.4  # Mask the top 40%, i.e. only use the bottom 60% of the image


def feature_align_sift(final_img, raw_img):
    final_gray = cv2.cvtColor(final_img, cv2.COLOR_RGB2GRAY)
    raw_gray   = cv2.cvtColor(raw_img,   cv2.COLOR_RGB2GRAY)
    
    final_eq = cv2.equalizeHist(final_gray)
    raw_eq   = cv2.equalizeHist(raw_gray)
    
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(final_eq, None)
    kp2, des2 = sift.detectAndCompute(raw_eq,   None)
    
    if des1 is None or des2 is None or len(kp1) < 4 or len(kp2) < 4:
        print("⚠️ Not enough features for alignment.")
        return raw_img, None
    
    # FLANN ratio test
    FLANN_INDEX_KDTREE = 1
    index_params  = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)
    # Lowe ratio test; FLANN can return fewer than k neighbours per query,
    # so guard against short pairs before unpacking.
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < MATCH_RATIO * pair[1].distance:
            good.append(pair[0])
    
    if len(good) < MIN_MATCHES:
        print("⚠️ Not enough good matches for homography.")
        return raw_img, None
    
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1,1,2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1,1,2)
    
    H, mask = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC, RANSAC_THRESHOLD)
    if H is None:
        print("⚠️ Homography estimation failed.")
        return raw_img, None
    
    aligned = cv2.warpPerspective(raw_img, H,
                                  (final_img.shape[1], final_img.shape[0]),
                                  flags=cv2.INTER_CUBIC)
    return aligned, H


def compute_valid_region(H, raw_shape, final_shape):    
    h_raw, w_raw = raw_shape[:2]
    corners = np.float32([[0,0],[w_raw,0],[w_raw,h_raw],[0,h_raw]]).reshape(-1,1,2)
    warped = cv2.perspectiveTransform(corners, H)
    x, y, w_box, h_box = cv2.boundingRect(warped)
    
    # Clamp the bounding box to the bounds of the final image
    x = max(x, 0)
    y = max(y, 0)
    w_box = min(w_box, final_shape[1] - x)
    h_box = min(h_box, final_shape[0] - y)
    return x, y, w_box, h_box
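
For reference, here is roughly how I drive these two functions; the file names are placeholders for my actual stack:

# Minimal driver sketch; file names are placeholders.
# imread returns BGR, so convert to RGB to match the COLOR_RGB2GRAY calls above.
final_img = cv2.cvtColor(cv2.imread("final.jpg"), cv2.COLOR_BGR2RGB)
raw_img   = cv2.cvtColor(cv2.imread("raw_0001.jpg"), cv2.COLOR_BGR2RGB)

aligned, H = feature_align_sift(final_img, raw_img)
if H is not None:
    x, y, w_box, h_box = compute_valid_region(H, raw_img.shape, final_img.shape)
    # Crop to the region actually covered by the warped raw image
    aligned_valid = aligned[y:y + h_box, x:x + w_box]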

When I run this on my images I expect to get near-perfect alignment; instead, the results show ghosting and misalignment. I've included a picture here showing the misaligned image.

I expect to get perfect alignment of the target and final images, and I'm a bit unsure where to go from here, as I've tried ORB, SIFT, and SURF with pretty much every variation of settings and interpolation methods.
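
One direction I'm considering is refining the SIFT homography with OpenCV's ECC algorithm (cv2.findTransformECC), whose correlation criterion is largely insensitive to global brightness and contrast changes. Below is a minimal sketch of that idea, untested on my full stack; note that ECC estimates a warp from template coordinates to input coordinates, hence the inversion of H and the WARP_INVERSE_MAP flag.

# Sketch: ECC refinement of the SIFT homography (assumes H maps raw -> final).
# findTransformECC estimates a warp from template coords to input coords,
# so the SIFT H is inverted for the seed and WARP_INVERSE_MAP is used to warp.
final_gray = cv2.cvtColor(final_img, cv2.COLOR_RGB2GRAY).astype(np.float32)
raw_gray   = cv2.cvtColor(raw_img,   cv2.COLOR_RGB2GRAY).astype(np.float32)

warp_init = (np.linalg.inv(H).astype(np.float32) if H is not None
             else np.eye(3, dtype=np.float32))
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)

cc, warp = cv2.findTransformECC(final_gray, raw_gray, warp_init,
                                cv2.MOTION_HOMOGRAPHY, criteria, None, 5)
refined = cv2.warpPerspective(raw_img, warp,
                              (final_img.shape[1], final_img.shape[0]),
                              flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

Seeding with the identity also works when the residual shift is small; the appeal is that ECC operates on dense intensities and gradients rather than sparse descriptors, so exposure differences tend to hurt it less than they hurt descriptor matching.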
