I am trying to predict age and gender from a face image with the age-gender-recognition-retail-0013 model, but I can't seem to understand the documentation correctly.
This is my setup:
ageGenderNet = cv2.dnn.readNet("models/age-gender-recognition-retail-0013/FP32/age-gender-recognition-retail-0013.xml",
                               "models/age-gender-recognition-retail-0013/FP32/age-gender-recognition-retail-0013.bin")

genders = ["female", "male"]

faceBlob = cv2.dnn.blobFromImage(face, 1.0, (227, 227),
                                 (78.4263377603, 87.7689143744, 114.895847746),
                                 swapRB=False)
ageGenderNet.setInput(faceBlob)

aPreds = ageGenderNet.forward()
predict_gender = aPreds[0][1][0][0]
gConfidence = predict_gender
gd = genders[int(predict_gender + 0.5)]
predict_age = aPreds[0][0][0][0] * 100
aConfidence = aPreds[0][0][0][0]
The gender is predicted correctly, but the age readings are weird and I think I might be doing something wrong. Looking at the camera for a while, I get anything from 1 to 90 as my predicted age. I would appreciate it if someone could take a look and tell me whether I'm extracting the age prediction and its confidence the wrong way.
Greetings to you.
First of all, the age-gender-recognition-retail-0013 model has its own limitations and specifications, which you can find in the link below. It recognizes ages in the [18, 75] range and is not applicable to children, since their faces were not in the training set. In addition, the expected input shape is 62 x 62, not the 227 x 227 you are using.
Looking at your code, both the gender and the age predictions are read from the same output blob, which means your "age" is actually derived from the gender output. You have to read the age and gender predictions from their respective output layers.
If you have a look at the link above, you will find that the model has two output layers: "age_conv3" for age (a [1, 1, 1, 1] blob containing the estimated age divided by 100) and "prob" for gender (a [1, 2, 1, 1] blob with softmax probabilities for female and male).
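A sketch of how the two outputs could be fetched and decoded — the helper function and variable names are mine, not from the model documentation; only the layer names "age_conv3" and "prob" and their blob layouts come from the model card:

```python
import numpy as np

def decode_age_gender(age_blob, prob_blob):
    """Decode the two raw output blobs of age-gender-recognition-retail-0013.

    age_blob:  shape [1, 1, 1, 1], value is age / 100 ("age_conv3")
    prob_blob: shape [1, 2, 1, 1], softmax over [female, male] ("prob")
    """
    age = float(age_blob[0][0][0][0]) * 100
    gender_probs = prob_blob.flatten()          # -> [p_female, p_male]
    gender_id = int(np.argmax(gender_probs))
    return age, ["female", "male"][gender_id], float(gender_probs[gender_id])

# With a loaded net (variable names taken from the question):
#   ageGenderNet.setInput(faceBlob)
#   age_blob, prob_blob = ageGenderNet.forward(["age_conv3", "prob"])
#   age, gender, confidence = decode_age_gender(age_blob, prob_blob)
```

Passing a list of layer names to `forward()` makes OpenCV return both output blobs in that order, instead of only the single default output your code was reusing for both predictions.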
You may also refer to this thread, which discusses the same topic; the solution code can be found there as well.