I am trying to implement face recognition: I need to check whether two faces are similar by comparing the embeddings of two image files, using the facenet.onnx model. I have added opencv-java410.dll in the run configuration, and I have tried the code below.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class FaceRecognition {

    static {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }

    private static final String MODEL_PATH = "D:\\FaceRecognition\\face-recognition\\src\\main\\resources\\facenet.onnx";
    private static final String HAAR_CASCADE_PATH = "D:\\FaceRecognition\\face-recognition\\src\\main\\resources\\haarcascade_frontalface_default.xml";

    public static void main(String[] args) {
        System.out.println("----------------------------");
        String imagePath1 = "D:\\FaceRecognition\\face-recognition\\src\\main\\resources\\image1.jpg";
        String imagePath2 = "D:\\FaceRecognition\\face-recognition\\src\\main\\resources\\image1.jpg"; // Use a different image for testing
        Mat img1 = Imgcodecs.imread(imagePath1);
        Mat img2 = Imgcodecs.imread(imagePath2);
        Mat embeddings1 = extractFaceEmbeddings(img1);
        Mat embeddings2 = extractFaceEmbeddings(img2);
        if (embeddings1 == null || embeddings2 == null) {
            System.out.println("Error: One or both images did not produce embeddings.");
            return;
        }
        double match = cosineSimilarity(embeddings1, embeddings2);
        System.out.println("Match Score: " + match);
    }

    private static Mat extractFaceEmbeddings(Mat img) {
        // Step 1: Detect faces
        CascadeClassifier faceDetector = new CascadeClassifier(HAAR_CASCADE_PATH);
        MatOfRect faces = new MatOfRect();
        faceDetector.detectMultiScale(img, faces);
        if (faces.toArray().length == 0) return null;
        // Step 2: Extract the first detected face
        Rect face = faces.toArray()[0];
        Mat faceImg = new Mat(img, face);
        System.out.println("Face Image Channels (Before Conversion): " + faceImg.channels());
        // Step 3: Convert to BGR if the image is grayscale
        if (faceImg.channels() == 1) {
            Imgproc.cvtColor(faceImg, faceImg, Imgproc.COLOR_GRAY2BGR);
        }
        System.out.println("Face Image Channels (After Conversion): " + faceImg.channels());
        // Step 4: Resize the face image to 160x160
        Imgproc.resize(faceImg, faceImg, new Size(160, 160));
        // Step 5: Load the ONNX model
        Net net = Dnn.readNetFromONNX(MODEL_PATH);
        // Step 6: Create the input blob
        Mat blob = Dnn.blobFromImage(faceImg, 1.0 / 255.0, new Size(160, 160), new Scalar(127, 127, 127), true, false);
        // Step 7: Debugging Blob Info
        System.out.println("Blob Shape: " + blob.size());
        System.out.println("Blob Channels: " + blob.channels());
        System.out.println("Blob Type: " + blob.type());
        System.out.println("Blob Dimensions: " + blob.size(0) + "x" + blob.size(1) + "x" + blob.size(2) + "x" + blob.size(3));
        // Step 8: Check if the blob has the correct number of channels
        if (blob.size(1) != 3) {
            System.err.println("❌ Error: Blob has incorrect number of channels.");
            return null;
        }
        Mat trans = new Mat();
        Core.transpose(blob, trans);
        // Step 9: Set the input blob
        net.setInput(trans);
        // Step 10: Forward pass to get the embeddings
        return net.forward();
    }

    // cosineSimilarity(Mat, Mat) helper not shown here
}
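The code above calls a cosineSimilarity helper that is not included in the post. For reference, here is a minimal sketch of such a helper over plain float arrays (the class name and array-based signature are my own assumptions; the actual helper presumably takes the two embedding Mats directly):

```java
// Hypothetical cosine-similarity helper, sketched over plain float arrays.
// Returns a value in [-1, 1]; values close to 1 mean the two embedding
// vectors point in nearly the same direction (i.e. the faces likely match).
public class CosineSimilarityDemo {

    static double cosineSimilarity(float[] a, float[] b) {
        double dot = 0.0, normA = 0.0, normB = 0.0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];      // accumulate the dot product
            normA += a[i] * a[i];    // accumulate squared magnitude of a
            normB += b[i] * b[i];    // accumulate squared magnitude of b
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        float[] v1 = {1f, 2f, 3f};
        float[] v2 = {1f, 2f, 3f};
        float[] v3 = {-3f, 0f, 1f};
        System.out.println(cosineSimilarity(v1, v2)); // identical vectors -> 1.0
        System.out.println(cosineSimilarity(v1, v3)); // orthogonal vectors -> 0.0
    }
}
```

With a 1xN embedding Mat of type CV_32F, the values can be copied into a float array first, e.g. `float[] arr = new float[(int) embeddings.total()]; embeddings.get(0, 0, arr);`, and then passed to this helper.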
I am getting the following error:
Exception in thread "main" CvException [org.opencv.core.CvException: cv::Exception: OpenCV(4.10.0) C:\GHA-OCV-1\_work\ci-gha-workflow\ci-gha-workflow\opencv\modules\core\src\matrix_transform.cpp:249: error: (-215:Assertion failed) _src.dims() <= 2 && esz <= 32 in function 'cv::transpose'
]
	at org.opencv.core.Core.transpose_0(Native Method)
	at org.opencv.core.Core.transpose(Core.java:3580)
	at com.example.face_recognition.util.FaceRecognition.extractFaceEmbeddings(FaceRecognition.java:80)
	at com.example.face_recognition.util.FaceRecognition.main(FaceRecognition.java:28)
I would like to know what I am missing. Is there another approach I should use? I am very new to face recognition, so any help is appreciated.