Using image data, predict the gender and age range of an individual in Python

Neel shah
Oct 30, 2021

Introduction

Age and gender prediction is used extensively in the field of computer vision, particularly for surveillance. Advancements in computer vision have made this prediction even more practical and accessible. Significant improvements have been made in this research area due to its usefulness in intelligent real-world applications.

Application

A human face contains features that determine the identity, age, gender, emotions, and ethnicity of people. Among these features, age and gender classification can be especially helpful in several real-world applications including security and video surveillance, electronic customer relationship management, biometrics, electronic vending machines, human-computer interaction, entertainment, cosmetology, and forensic art.

Implementation

Typically, you’ll see age detection implemented as a two-stage process:

1. Stage #1: Detect faces from the input image

2. Stage #2: Extract the face Region of Interest (ROI) and apply the age (and gender) classification models to predict the age range and gender of the person

For Stage #1, any face detector capable of producing bounding boxes for faces in an image can be used.

The face detector produces the bounding box coordinates of the face in the image.
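To make Stage #1 concrete, here is a minimal sketch of face detection with OpenCV's DNN module. It assumes the same pre-trained face detector files used in main.py below (opencv_face_detector.pbtxt and opencv_face_detector_uint8.pb) and a placeholder image file named input.jpg:

import cv2

# Load the pre-trained face detector (model file first, config file second).
faceNet = cv2.dnn.readNet("opencv_face_detector_uint8.pb", "opencv_face_detector.pbtxt")

frame = cv2.imread("input.jpg")  # placeholder file name
frameHeight, frameWidth = frame.shape[:2]

# The detector expects a 300x300 blob with these mean values subtracted.
blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300), [104, 117, 123], True, False)
faceNet.setInput(blob)
detections = faceNet.forward()

# Keep detections above a confidence threshold and scale them back to pixels.
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.7:
        x1 = int(detections[0, 0, i, 3] * frameWidth)
        y1 = int(detections[0, 0, i, 4] * frameHeight)
        x2 = int(detections[0, 0, i, 5] * frameWidth)
        y2 = int(detections[0, 0, i, 6] * frameHeight)
        print("Face bounding box:", (x1, y1, x2, y2))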

Stage #2 then identifies the age (and gender) of the person.

Given the bounding box (x, y)-coordinates of the face, we first extract the face ROI, ignoring the rest of the image/frame. Doing so allows the age detector to focus solely on the person’s face and not any other irrelevant “noise” in the image.

The face ROI is then passed through the model, yielding the actual age prediction.
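As a minimal sketch of Stage #2, continuing from the Stage #1 sketch above (reusing frame and the last x1, y1, x2, y2) and assuming the same age and gender model files loaded in main.py below:

# Mean values and labels used by the pre-trained age and gender Caffe models.
MODEL_MEAN_VALUES = (78.4263377603, 87.7689143744, 114.895847746)
ageList = ['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)']
genderList = ['Male', 'Female']

ageNet = cv2.dnn.readNet("age_net.caffemodel", "age_deploy.prototxt")
genderNet = cv2.dnn.readNet("gender_net.caffemodel", "gender_deploy.prototxt")

# Crop the face ROI so the classifiers see only the face, not the background.
face = frame[y1:y2, x1:x2]
blob = cv2.dnn.blobFromImage(face, 1.0, (227, 227), MODEL_MEAN_VALUES, swapRB=False)

genderNet.setInput(blob)
print("Gender:", genderList[genderNet.forward()[0].argmax()])

ageNet.setInput(blob)
print("Age range:", ageList[ageNet.forward()[0].argmax()])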

Prerequisites

Step-1: Command for installing OpenCV

pip install opencv-python

Step-2: Command for installing argparse (note that argparse ships with the Python standard library, so this step is usually not required)

pip install argparse

The script also expects the pre-trained model files it references (opencv_face_detector.pbtxt, opencv_face_detector_uint8.pb, age_deploy.prototxt, age_net.caffemodel, gender_deploy.prototxt, gender_net.caffemodel) to be present in the working directory.

main.py

import cv2
import math
import argparse


def highlightFace(net, frame, conf_threshold=0.7):
    # Detect faces in the frame, draw their bounding boxes on a copy of the
    # frame, and return the annotated frame plus the list of face boxes.
    frameOpencvDnn = frame.copy()
    frameHeight = frameOpencvDnn.shape[0]
    frameWidth = frameOpencvDnn.shape[1]
    blob = cv2.dnn.blobFromImage(frameOpencvDnn, 1.0, (300, 300), [104, 117, 123], True, False)
    net.setInput(blob)
    detections = net.forward()
    faceBoxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:
            # Detections are fractions of the frame size; scale them to pixels.
            x1 = int(detections[0, 0, i, 3] * frameWidth)
            y1 = int(detections[0, 0, i, 4] * frameHeight)
            x2 = int(detections[0, 0, i, 5] * frameWidth)
            y2 = int(detections[0, 0, i, 6] * frameHeight)
            faceBoxes.append([x1, y1, x2, y2])
            cv2.rectangle(frameOpencvDnn, (x1, y1), (x2, y2), (0, 255, 0),
                          int(round(frameHeight / 150)), 8)
    return frameOpencvDnn, faceBoxes


parser = argparse.ArgumentParser()
parser.add_argument('--image')
args = parser.parse_args()

# Pre-trained model and config files for face detection, age, and gender.
faceProto = "opencv_face_detector.pbtxt"
faceModel = "opencv_face_detector_uint8.pb"
ageProto = "age_deploy.prototxt"
ageModel = "age_net.caffemodel"
genderProto = "gender_deploy.prototxt"
genderModel = "gender_net.caffemodel"

MODEL_MEAN_VALUES = (78.4263377603, 87.7689143744, 114.895847746)
ageList = ['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)']
genderList = ['Male', 'Female']

# Load the face detector and the age and gender classifiers.
faceNet = cv2.dnn.readNet(faceModel, faceProto)
ageNet = cv2.dnn.readNet(ageModel, ageProto)
genderNet = cv2.dnn.readNet(genderModel, genderProto)

# Read from the given image if --image was passed, otherwise from the webcam.
video = cv2.VideoCapture(args.image if args.image else 0)
padding = 20

while cv2.waitKey(1) < 0:
    hasFrame, frame = video.read()
    if not hasFrame:
        cv2.waitKey()
        break

    resultImg, faceBoxes = highlightFace(faceNet, frame)
    if not faceBoxes:
        print("No face detected")

    for faceBox in faceBoxes:
        # Crop the face ROI with some padding, clamped to the frame borders.
        face = frame[max(0, faceBox[1] - padding):
                     min(faceBox[3] + padding, frame.shape[0] - 1),
                     max(0, faceBox[0] - padding):
                     min(faceBox[2] + padding, frame.shape[1] - 1)]

        blob = cv2.dnn.blobFromImage(face, 1.0, (227, 227), MODEL_MEAN_VALUES, swapRB=False)

        genderNet.setInput(blob)
        genderPreds = genderNet.forward()
        gender = genderList[genderPreds[0].argmax()]
        print(f'Gender: {gender}')

        ageNet.setInput(blob)
        agePreds = ageNet.forward()
        age = ageList[agePreds[0].argmax()]
        print(f'Age: {age[1:-1]} years')

        cv2.putText(resultImg, f'{gender}, {age}', (faceBox[0], faceBox[1] - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2, cv2.LINE_AA)

    cv2.imshow("Detecting age and gender", resultImg)

Now, to test the script, run it from the command line.
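Run the detector on a saved image (your_image.jpg is just a placeholder name):

python main.py --image your_image.jpg

Omit the --image argument to read frames from the webcam instead:

python main.py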


Written by Neel shah

Information Technology Graduate, CHARUSAT UNIVERSITY, 2022
