I’m developing a Python script to rank images based on "blueness" for a colorblind accessibility project (I'm colorblind myself). The goal is to prioritize images with large blue areas over those with tiny intense blue spots. Despite adjustments, non-blue images (e.g., grays/whites) are ranking highly due to residual blue channel values.
Problem
The script calculates a combined score (coverage + intensity) but incorrectly ranks images like image20.png (0% coverage, 25% intensity) above truly blue images.
Key Requirements
- Coverage: Percentage of pixels where blue is dominant and sufficiently bright.
- Intensity: Average blue value only for qualifying pixels.
- Score: 0.9 * coverage + 0.1 * intensity to prioritize coverage.
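For example (made-up numbers): an image that is 50% blue at 60% average intensity should score 0.9 * 0.50 + 0.1 * 0.60 = 0.51, while one with only a tiny bright spot (1% coverage, 90% intensity) should score just 0.9 * 0.01 + 0.1 * 0.90 = 0.099.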
Current Code
from PIL import Image
import numpy as np

def average_blue_intensity(image_path):
    img = Image.open(image_path).convert('RGB')
    img_array = np.array(img).astype(np.int16)
    red, green, blue = img_array[:, :, 0], img_array[:, :, 1], img_array[:, :, 2]
    mask = (blue > red + 40) & (blue > green + 40) & (blue > 100)
    return np.mean(blue[mask]) / 255 if np.any(mask) else 0

def blue_coverage(image_path):
    # ... (same mask as above)
    return np.mean(mask)

# Combined score calculation
combined_scores = {path: 0.9 * coverage + 0.1 * intensity ...}
What I’ve Tried
Stricter Thresholds: Blue must exceed red/green by 40 units and be >100 in brightness.
Debugging Pixels: Confirmed image20.png has no true blue pixels (e.g., grays like (100,100,100)).
Visualizing Masks: Generated debug images to verify mask accuracy.
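The debug dump is roughly the following (the helper name and output path are just placeholders); it reuses the exact mask from the code above, writing white where a pixel counts as blue and black where it is ignored:

from PIL import Image
import numpy as np

def save_mask_debug(image_path, out_path='mask_debug.png'):
    img = np.array(Image.open(image_path).convert('RGB')).astype(np.int16)
    red, green, blue = img[:, :, 0], img[:, :, 1], img[:, :, 2]
    mask = (blue > red + 40) & (blue > green + 40) & (blue > 100)
    # White = pixel counted as blue, black = ignored.
    Image.fromarray((mask * 255).astype(np.uint8)).save(out_path)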
Unexpected Behavior
image20.png (no blue) scores higher than image127.png (5% coverage).
Full output:
1. image20.png - Score: 0.0999 (Coverage: 0.00%, Intensity: 24.97%)
2. image127.png - Score: 0.0964 (Coverage: 5.23%, Intensity: 16.26%)
Question
How can I refine the algorithm to exclude non-blue pixels (e.g., grays/whites) and ensure the score truly reflects blue dominance?
1 Answer
You are evaluating an image the way the camera sees it, in RGB space.
Consider switching to something closer to human perception, such as HSV space.
A PyPI library can help with that.
$ pip install nice-colorsys
$
$ python -c 'from nice_colorsys import hsv; print("\n", hsv(0.5, 1, 1))'
hsv(hue=0.5, saturation=1, value=1)
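A minimal sketch of that idea, swapping in matplotlib.colors.rgb_to_hsv for the whole-image conversion instead of a per-pixel nice-colorsys call; the hue band (0.55-0.75) and the saturation/value floors are assumptions to tune, not fixed values:

import numpy as np
from PIL import Image
from matplotlib.colors import rgb_to_hsv

def blue_stats_hsv(image_path):
    rgb = np.array(Image.open(image_path).convert('RGB')) / 255.0
    hsv = rgb_to_hsv(rgb)                      # h, s, v all in [0, 1]
    h, s, v = hsv[:, :, 0], hsv[:, :, 1], hsv[:, :, 2]
    # Pure blue sits at h = 2/3; the saturation floor rejects grays/whites
    # no matter how large their raw blue channel value is.
    mask = (h > 0.55) & (h < 0.75) & (s > 0.25) & (v > 0.2)
    coverage = mask.mean()
    intensity = v[mask].mean() if mask.any() else 0.0
    return coverage, intensity

Because grays and whites have near-zero saturation, they drop out of the mask entirely, which is exactly the failure mode described for image20.png.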
Additionally, you might binarize with Otsu, paying attention just to the bluishness of pixels rather than their grey-scale intensity. The OpenCV library has a good Otsu implementation. You can then evaluate your masked image, or perhaps simply examine the resulting mask.
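A minimal sketch of the Otsu route, assuming opencv-python is installed; the "bluishness" channel below (blue minus the larger of red/green) is one reasonable choice, not the only one:

import cv2
import numpy as np
from PIL import Image

def blue_mask_otsu(image_path):
    rgb = np.array(Image.open(image_path).convert('RGB')).astype(np.int16)
    r, g, b = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2]
    # Single-channel "bluishness": how far blue exceeds the stronger of red/green.
    bluishness = np.clip(b - np.maximum(r, g), 0, 255).astype(np.uint8)
    # Otsu picks the cut-off from the image's own histogram instead of
    # hand-tuned +40 / >100 thresholds.
    _, mask = cv2.threshold(bluishness, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask.astype(bool)          # True where the pixel counts as "blue"

One caveat: Otsu always finds some split, so for images with no blue at all you may still want a minimum bluishness floor before trusting the mask.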