CN110785769A - Face gender identification method, and training method and device of face gender classifier - Google Patents


Info

Publication number
CN110785769A
CN110785769A
Authority
CN
China
Prior art keywords
face
image
gray level
gender
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980001859.5A
Other languages
Chinese (zh)
Inventor
张欢欢 (Zhang Huanhuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Publication of CN110785769A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides a face gender identification method and a training method and device for a face gender classifier. The face gender identification method includes the following steps: performing face detection on an image to be recognized to obtain a target face in the image to be recognized; extracting face feature points from the target face; obtaining a face region-of-interest image according to the face feature points; performing feature extraction on the face region-of-interest image to obtain a feature image; performing gray level compression on the feature image, and acquiring a gray level histogram of the compressed image; obtaining gender features of the target face according to the gray level histogram; and inputting the gender features of the target face into a trained face gender classifier to obtain a gender identification result of the target face.

Description

Face gender identification method, and training method and device of face gender classifier
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to a face gender identification method, a training method for a face gender classifier, and corresponding devices.
Background
The human face is one of the most important biometric features of human beings, and face-based recognition has become a research focus in recent years. Face gender identification has broad development prospects and application value in pattern recognition, artificial intelligence, computer vision, information security, and other fields.
Because the differences between male and female facial features are subtle, and factors such as expression, age, and illumination further complicate matters, identifying gender from a face is difficult, and ideal accuracy is hard to achieve.
Disclosure of Invention
The disclosure provides a face gender identification method, a face gender classifier training method and a face gender classifier training device.
In a first aspect, the present disclosure provides a face gender identification method, including:
performing face detection on an image to be recognized to obtain a target face in the image to be recognized;
extracting face feature points from the target face;
obtaining a face region-of-interest image according to the face feature points;
performing feature extraction on the face region-of-interest image to obtain a feature image;
performing gray level compression on the feature image, and acquiring a gray level histogram of the compressed image;
obtaining gender features of the target face according to the gray level histogram;
and inputting the gender features of the target face into a trained face gender classifier to obtain a gender identification result of the target face.
Optionally, after extracting the face feature points of the target face and before obtaining the face region-of-interest image according to the face feature points, the method further includes:
geometrically correcting the target face according to the face feature points to obtain a corrected image, where in the corrected image the line connecting the two eyes of the target face is horizontal.
Optionally, the performing feature extraction on the face region-of-interest image to obtain a feature image further includes:
scaling the face region-of-interest image to a specified size.
Optionally, the performing feature extraction on the face region-of-interest image to obtain a feature image includes: performing feature extraction on the face region-of-interest image using a co-occurrence local binary pattern (CoLBP) feature extraction algorithm to obtain a plurality of feature images;
and the obtaining gender features of the target face according to the gray level histogram includes: concatenating the gray level histograms of the plurality of feature images to obtain the gender features of the target face.
Optionally, the performing gray level compression on the feature image and acquiring a gray level histogram of the compressed image includes:
dividing the feature image into n × m image blocks of the same size, where n and m are each positive integers greater than or equal to 2;
for at least one image block, compressing the gray levels of the image block to a first value to obtain a gray-level-compressed image, where the first value is smaller than the number of gray levels of the feature image;
acquiring a gray level histogram of the gray-level-compressed image;
and concatenating the gray level histograms of the n × m image blocks to obtain the gray level histogram of the feature image.
In a second aspect, the present disclosure provides a training method for a face gender classifier, including:
acquiring a plurality of face image samples, where the face image samples include a plurality of images with male faces and a plurality of images with female faces;
performing face detection on the face image samples, and determining the position of the target face in each face image sample;
extracting face feature points from the target face;
obtaining a face region-of-interest image according to the face feature points;
performing feature extraction on the face region-of-interest image to obtain a feature image;
performing gray level compression on the feature image, and acquiring a gray level histogram of the compressed image;
obtaining gender features of the target face according to the gray level histogram;
and training a face gender classifier using the gender features of the target faces in all the face image samples to obtain a trained face gender classifier.
Optionally, after extracting the face feature points of the target face and before obtaining the face region-of-interest image according to the face feature points, the method further includes:
geometrically correcting the target face according to the face feature points to obtain a corrected image, where in the corrected image the line connecting the two eyes of the target face is horizontal.
Optionally, the performing feature extraction on the face region-of-interest image to obtain a feature image further includes:
scaling the face region-of-interest image to a specified size.
Optionally, the performing feature extraction on the face region-of-interest image to obtain a feature image includes: performing feature extraction on the face region-of-interest image using a CoLBP feature extraction algorithm to obtain a plurality of feature images;
and the obtaining gender features of the target face according to the gray level histogram includes: concatenating the gray level histograms of the plurality of feature images to obtain the gender features of the target face.
Optionally, the performing gray level compression on the feature image and acquiring a gray level histogram of the compressed image includes:
dividing the feature image into n × m image blocks of the same size, where n and m are each positive integers greater than or equal to 2;
for at least one image block, compressing the gray levels of the image block to a first value to obtain a gray-level-compressed image, where the first value is smaller than the number of gray levels of the feature image;
acquiring a gray level histogram of the gray-level-compressed image;
and concatenating the gray level histograms of the n × m image blocks to obtain the gray level histogram of the feature image.
In a third aspect, the present disclosure provides a face gender identification device, including:
a face detector, configured to perform face detection on an image to be recognized and obtain a target face in the image to be recognized;
a face feature point extractor, configured to extract face feature points from the target face;
a face region-of-interest acquirer, configured to obtain a face region-of-interest image according to the face feature points;
an image feature extractor, configured to perform feature extraction on the face region-of-interest image to obtain a feature image;
a gray level compression processor, configured to perform gray level compression on the feature image and acquire a gray level histogram of the compressed image;
a gender feature acquirer, configured to obtain gender features of the target face according to the gray level histogram;
and a gender recognizer, configured to input the gender features of the target face into a trained face gender classifier to obtain a gender identification result of the target face.
In a fourth aspect, the present disclosure provides a training device for a face gender classifier, comprising:
a face image sample acquirer, configured to acquire a plurality of face image samples, where the face image samples include a plurality of images with male faces and a plurality of images with female faces;
a face feature point extractor, configured to extract face feature points from the target face;
a face region-of-interest acquirer, configured to obtain a face region-of-interest image according to the face feature points;
an image feature extractor, configured to perform feature extraction on the face region-of-interest image to obtain a feature image;
a gray level compression processor, configured to perform gray level compression on the feature image and acquire a gray level histogram of the compressed image;
a gender feature acquirer, configured to obtain gender features of the target face according to the gray level histogram;
and a trainer, configured to train a face gender classifier using the gender features of the target faces in all the face image samples to obtain a trained face gender classifier.
In a fifth aspect, the present disclosure provides a face gender identification apparatus, including a processor, a memory, and a computer program stored in the memory and runnable on the processor, where the computer program, when executed by the processor, implements the steps of the above face gender identification method.
In a sixth aspect, the present disclosure provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the above face gender identification method, or implements the steps of the above training method for a face gender classifier.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the disclosure. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flow chart of a face gender identification method according to some embodiments of the present disclosure;
fig. 2 is a schematic flowchart of acquiring a gray histogram of an image after gray level compression according to some embodiments of the present disclosure;
fig. 3 is a schematic flowchart of a training method of a face gender classifier according to some embodiments of the present disclosure;
fig. 4 is a schematic flowchart of acquiring a gray histogram of an image after gray level compression according to some other embodiments of the present disclosure;
fig. 5 is a schematic structural diagram of a face gender recognition apparatus according to some embodiments of the present disclosure;
fig. 6 is a schematic diagram of a structure of a gray scale compression processor according to some embodiments of the present disclosure;
fig. 7 is a schematic structural diagram of a training apparatus for a face gender classifier according to some embodiments of the present disclosure;
fig. 8 is a schematic diagram of a structure of a gray scale compression processor according to some other embodiments of the present disclosure;
fig. 9 is another schematic structural diagram of a face gender recognition apparatus according to some embodiments of the present disclosure;
fig. 10 is a schematic structural diagram of another training apparatus for a face gender classifier according to some embodiments of the present disclosure.
Detailed Description
The technical solutions in the present disclosure will be described clearly and completely with reference to the accompanying drawings in the present disclosure, and it is obvious that the described embodiments are some, not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
Referring to fig. 1, fig. 1 is a schematic flow chart of a face gender identification method according to some embodiments of the present disclosure, the method including:
step 11: carrying out face detection on an image to be recognized to acquire a target face in the image to be recognized;
face detection is the search for faces from images and the determination of the position and size of the face.
In some embodiments, a Dlib library may be employed for face detection. Dlib is a toolbox containing machine learning algorithms and tools for creating complex software to solve practical problems. The Dlib library includes tools for object detection in images, including frontal face detection and object pose estimation, among others. Of course, other algorithms may be used for face detection in the present disclosure.
Step 12: extracting face feature points from the target face;
In some embodiments, the Dlib library may be used to extract 68 face feature points. Of course, if other algorithms are used for face detection and landmark extraction, the number of extracted face feature points may be different, which is not limited herein.
Step 13: obtaining a human face region of interest (ROI) image according to the human face characteristic points;
step 14: extracting the features of the face region-of-interest image to obtain a feature image;
step 15: performing gray level compression processing on the characteristic image, and acquiring a gray level histogram of the image after gray level compression;
the gray level of the feature image before compression is 256, namely the range of the gray level values is 0-255, the feature image is subjected to gray level compression processing, and the gray level of the compressed image is less than 256, for example, 16, namely the range of the gray level values of the image is 0-15. The gray scale levels in this disclosure may be in other ranges and are not limited herein.
Step 16: obtaining gender features of the target face according to the gray level histogram;
Step 17: inputting the gender features of the target face into a trained face gender classifier to obtain a gender identification result of the target face.
In the method, performing gray level compression on the feature image reduces the dimensionality of the features in the feature image. This reduces the influence of factors such as the age, expression, and illumination of the target face on gender identification and improves recognition accuracy, so the method is applicable not only to laboratory scenes but also to natural environments. Meanwhile, because the feature dimensionality is reduced, the recognition speed is effectively improved, making the method suitable for devices with high requirements on recognition speed, such as embedded devices without a GPU (graphics processing unit).
In some embodiments, optionally, after extracting the face feature points of the target face in step 12 and before obtaining the face region-of-interest image according to the face feature points in step 13, the method further includes: preprocessing the image to be recognized.
In some embodiments, the preprocessing may include filtering the image to be recognized. Image filtering suppresses image noise while preserving image detail as much as possible; the quality of this processing directly affects the effectiveness and reliability of subsequent image processing and analysis. In some embodiments, a Gaussian filter may be used to filter the image to be recognized to remove image noise.
In some embodiments, the pre-processing may include: and geometrically correcting the target face according to the face characteristic points to obtain a corrected image, wherein in the corrected image, a connecting line between two eyes of the target face is a horizontal line.
Optionally, an affine transformation matrix may be determined according to the face feature points, and geometric correction of the target face may be performed according to the affine transformation matrix.
The target face detected from the image to be recognized may be tilted, so geometric correction is performed so that the line connecting the two eyes of the corrected target face is horizontal, which facilitates the subsequent feature extraction process.
After the geometric correction is performed, the coordinates of the face feature points in the corrected image are also acquired and used to determine the face region of interest. Optionally, the highest, lowest, leftmost, and rightmost points among the face feature point coordinates may be obtained to determine a circumscribed rectangle of the face, and the circumscribed rectangle region is cropped to obtain the face region-of-interest image.
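The two preprocessing computations described above, the eye-line rotation angle and the circumscribed rectangle of the landmark set, can be sketched as follows. Landmark coordinates are assumed to be (x, y) pixel pairs, and the function names are illustrative, not from the patent:

```python
import numpy as np

def eye_alignment_angle(left_eye, right_eye):
    """Rotation angle (degrees) that makes the line between the eye
    centers horizontal; feeding it into an affine rotation performs the
    geometric correction described above."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return float(np.degrees(np.arctan2(dy, dx)))

def face_roi_bounds(landmarks):
    """Circumscribed rectangle (x_min, y_min, x_max, y_max) of the face
    feature points; cropping it yields the face region-of-interest image."""
    pts = np.asarray(landmarks)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return int(x_min), int(y_min), int(x_max), int(y_max)
```

In a full pipeline the angle would parameterize an affine transformation matrix applied to the whole image, after which the landmark coordinates are re-derived in the corrected image.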
In some embodiments, optionally, the performing feature extraction on the face region-of-interest image in step 14 to obtain a feature image further includes: scaling the face region-of-interest image to a specified size. That is, the face region-of-interest image is down-sampled to a fixed size, which reduces the influence of factors such as image resolution and the variable distance between the camera and the face on face gender identification, and improves the robustness of the identification method.
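A minimal nearest-neighbour down-sampling sketch of this scaling step follows; a real pipeline would typically use a library resize (e.g. OpenCV's), which also offers smoother interpolation modes:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize to a fixed (out_h, out_w) size, giving the
    face region-of-interest image a uniform resolution before feature
    extraction."""
    img = np.asarray(img)
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source column for each output column
    return img[rows[:, None], cols]
```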
In some embodiments, optionally, the performing feature extraction on the face region-of-interest image in step 14 to obtain a feature image includes: performing feature extraction on the face region-of-interest image using a CoLBP (co-occurrence local binary pattern) feature extraction algorithm to obtain a plurality of feature images, for example, 8 feature images. The number of feature images may also be other numbers, which is not limited herein.
In this case, the obtaining gender features of the target face according to the gray level histogram in step 16 includes: concatenating the gray level histograms of the plurality of feature images to obtain the gender features of the target face.
Compared with existing CNN (convolutional neural network) feature extraction algorithms, the CoLBP feature extraction algorithm has much lower computational complexity, so it can effectively increase the recognition speed and is suitable for devices with high requirements on recognition speed, such as embedded devices without a GPU (graphics processing unit).
The CoLBP feature extraction algorithm may specifically be: first, extracting edge response feature images of the face region-of-interest image in multiple directions by using multiple directional filters; then, computing LBP (local binary pattern) features on each of the edge response feature images to obtain a plurality of feature images. For example, 8 directional filters may be used to extract edge response feature images in 8 directions, and LBP features are then computed on each of the 8 edge response feature images to obtain 8 feature images. The 8 directions may include the two directions along the horizontal center line, the two directions along the vertical center line, and the four diagonal directions of the image. Using 8 directional filters provides finer image edge responses, although this disclosure does not exclude the use of 4, or more than 8, directional filters.
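A sketch of this directional-filter-then-LBP pipeline follows. The disclosure does not name a specific filter bank, so the 8-direction Kirsch masks are used here as one plausible choice, together with a basic 8-neighbour LBP code; all function names are illustrative:

```python
import numpy as np

# Eight 3x3 directional edge masks (Kirsch operator), one plausible choice
# of "directional filters"; the patent does not name a specific set.
KIRSCH = [np.array(m) for m in (
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],   # N
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],   # NW
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],   # W
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],   # SW
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],   # S
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],   # SE
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],   # E
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],   # NE
)]

def filter2_valid(img, k):
    """Plain 'valid' 3x3 correlation (no padding); output is 2 smaller
    in each dimension."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def lbp(img):
    """Basic 8-neighbour LBP code (0..255) for each interior pixel."""
    c = img[1:-1, 1:-1]
    nbrs = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
            img[1:-1, 2:],   img[2:,  2:],   img[2:,  1:-1],
            img[2:,  0:-2],  img[1:-1, 0:-2]]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, n in enumerate(nbrs):
        code |= (n >= c).astype(np.uint8) << bit
    return code

def colbp_feature_images(roi):
    """Directional edge responses followed by LBP -> 8 feature images."""
    roi = np.asarray(roi, dtype=float)
    return [lbp(filter2_valid(roi, k)) for k in KIRSCH]
```

Each feature image is then divided into blocks and histogrammed as in steps 151-154.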
In some embodiments, referring to fig. 2, the performing gray level compression on the feature image in step 15 and acquiring a gray level histogram of the compressed image includes:
step 151: dividing the characteristic image into n multiplied by m image blocks with the same size, wherein n is a positive integer greater than or equal to 2, and m is a positive integer greater than or equal to 2; for example, n is 7, and m may have the same or different value.
Step 152: for at least one image block, compressing the gray levels of the image block to a first value to obtain a gray-level-compressed image, where the first value is smaller than the number of gray levels of the feature image;
the number of gray levels of the feature image is, for example, 256, and the first value is, for example, 16.
Step 153: acquiring a gray level histogram of the image after gray level compression;
optionally, the grayscale histogram is a normalized histogram.
Step 154: concatenating the gray level histograms of the n × m image blocks to obtain the gray level histogram of the feature image.
Optionally, the gray level histograms of the image blocks of one feature image may be concatenated from left to right and from top to bottom to obtain the gray level histogram of that feature image.
If there are multiple feature images, for example 8, the gray level histograms of the 8 feature images are concatenated to obtain the gender feature of the target face.
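Steps 151-154 (block division, per-block 16-level compression, normalized histograms, left-to-right then top-to-bottom concatenation) can be sketched as follows; the function name and parameter defaults are illustrative:

```python
import numpy as np

def blockwise_histogram(feat_img, n=7, m=7, levels=16):
    """Split an 8-bit feature image into n x m equal blocks, compress each
    block to `levels` gray levels, and concatenate the per-block
    normalized histograms left-to-right, top-to-bottom.

    Blocks are taken as floor-divided tiles; any remainder pixels when the
    image size is not divisible by n or m are dropped for simplicity.
    """
    h, w = feat_img.shape
    bh, bw = h // n, w // m
    step = 256 // levels
    hists = []
    for i in range(n):
        for j in range(m):
            block = feat_img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] // step
            hist, _ = np.histogram(block, bins=levels, range=(0, levels))
            hists.append(hist / hist.sum())  # normalized histogram
    return np.concatenate(hists)             # length n * m * levels
```

For 8 CoLBP feature images, running this on each and concatenating the 8 resulting vectors yields the gender feature of the target face.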
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a training method of a face gender classifier according to some embodiments of the present disclosure, the method including:
step 21: acquiring a plurality of face image samples, wherein the face image samples comprise a plurality of images with male faces and a plurality of images with female faces;
Optionally, the number of images with male faces and the number of images with female faces may be the same or substantially the same.
Optionally, the images with male faces and the images with female faces include people of different ages, so that the trained face gender classifier can identify people across age groups.
Step 22: carrying out face detection on the face image sample, and determining the position of a target face in the face image sample;
face detection is the search for faces from images and the determination of the position and size of the face.
In some embodiments, a Dlib library may be employed for face detection.
Step 23: extracting face feature points from the target face;
In some embodiments, the Dlib library may be used to extract 68 face feature points. The number of extracted face feature points may also be other numbers, which is not limited herein.
Step 24: obtaining a face interesting region image according to the face characteristic points;
step 25: extracting the features of the face region-of-interest image to obtain a feature image;
step 26: performing gray level compression processing on the characteristic image, and acquiring a gray level histogram of the image after gray level compression;
The feature image before compression has 256 gray levels, that is, gray values in the range 0-255. Gray level compression is applied so that the compressed image has fewer than 256 gray levels, for example 16, that is, gray values in the range 0-15. Other numbers of gray levels are also possible and are not limited herein.
Step 27: obtaining gender characteristics of the target face according to the gray level histogram;
step 28: and training a face gender classifier by adopting the gender characteristics of the target face in all the face image samples to obtain the trained face gender classifier.
In the method, performing gray level compression on the feature image reduces the dimensionality of the features in the feature image, which reduces the influence of factors such as the age, expression, and illumination of the target face on gender identification and improves recognition accuracy, so the trained face gender classifier is applicable not only to laboratory scenes but also to natural environments.
In some embodiments, optionally, after extracting the face feature points of the target face in step 23 and before obtaining the face region-of-interest image according to the face feature points in step 24, the method further includes: preprocessing the face image sample.
In some embodiments, the preprocessing may include filtering the face image sample. Image filtering suppresses image noise while preserving image detail as much as possible; the quality of this processing directly affects the effectiveness and reliability of subsequent image processing and analysis. In some embodiments, a Gaussian filter may be used to filter the face image samples to remove image noise.
In some embodiments, the pre-processing may include: and geometrically correcting the target face according to the face characteristic points to obtain a corrected image, wherein in the corrected image, a connecting line between two eyes of the target face is a horizontal line.
Optionally, an affine transformation matrix may be determined according to the face feature points, and geometric correction of the target face may be performed according to the affine transformation matrix.
The target face detected from a face image sample may be tilted, so geometric correction is performed so that the line connecting the two eyes of the corrected target face is horizontal, which facilitates the subsequent feature extraction process.
After the geometric correction is performed, the coordinates of the face feature points in the corrected image are also acquired and used to determine the face region of interest. Optionally, the highest, lowest, leftmost, and rightmost points among the face feature point coordinates may be obtained to determine a circumscribed rectangle of the face, and the circumscribed rectangle region is cropped to obtain the face region-of-interest image.
In some embodiments, optionally, the performing feature extraction on the face region-of-interest image in step 25 to obtain a feature image further includes: scaling the face region-of-interest image to a specified size. The face region-of-interest image is down-sampled to a fixed size, which reduces the influence of factors such as image resolution and the variable distance between the camera and the face on face gender identification, and improves the robustness of the face gender classifier.
In some embodiments, optionally, the feature extraction performed on the face region-of-interest image in step 25 to obtain a feature image includes: performing feature extraction on the face region-of-interest image with a CoLBP (co-occurrence local binary pattern) feature extraction algorithm to obtain a plurality of feature images, for example, 8 feature images. The number of feature images obtained in the present disclosure may be other numbers, and is not limited herein.
In this case, obtaining the gender feature of the target face according to the gray level histogram in step 27 includes: concatenating the gray level histograms of the plurality of feature images to obtain the gender feature of the target face.
Compared with an existing CNN feature extraction algorithm, the CoLBP feature extraction algorithm greatly reduces computational complexity, so it can effectively increase the recognition speed and is suitable for devices with demanding recognition-speed requirements, such as embedded devices without a GPU (graphics processing unit).
The CoLBP feature extraction algorithm may specifically be: first, extracting edge response feature images of the face region-of-interest image in multiple directions using multiple directional filters; then, computing the LBP feature of each of the edge response feature images, thereby obtaining a plurality of feature images.
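The LBP stage of the CoLBP pipeline can be sketched as the classic 8-neighbour code below. The directional edge responses would be computed first (the disclosure does not name the filters, so Sobel- or Gabor-like kernels are an assumption); the sketch shows only the LBP coding of one response image:

```python
def lbp_image(img):
    # 8-neighbour local binary pattern: each interior pixel is coded by
    # comparing its 8 neighbours against the centre value, one bit each.
    h, w = len(img), len(img[0])
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    out = [[0] * w for _ in range(h)]  # borders are left at 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            out[y][x] = code
    return out
```

Running this once per directional edge response yields the plurality of feature images whose histograms are later concatenated.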
In some embodiments, optionally, referring to fig. 4, performing the gray level compression processing on the feature image in step 26 and obtaining a gray level histogram of the gray-level-compressed image includes:
Step 261: dividing the feature image into n × m image blocks of the same size, where n is a positive integer greater than or equal to 2 and m is a positive integer greater than or equal to 2; for example, n is 7, and m may be the same as or different from n.
Step 262: for at least one image block, compressing the gray level of the image block to a first value to obtain a gray-level-compressed image, where the first value is smaller than the gray level of the feature image.
The gray level of the feature image is, for example, 256, and the first value is, for example, 16.
Step 263: acquiring a gray level histogram of the gray-level-compressed image.
Optionally, the gray level histogram is a normalized histogram.
Step 264: concatenating the gray level histograms of the n × m image blocks to obtain the gray level histogram of the feature image.
Optionally, the gray level histograms of the image blocks of one feature image may be concatenated from left to right and from top to bottom to obtain the gray level histogram of the feature image.
If there are multiple feature images, for example 8, the gray level histograms of the 8 feature images are concatenated to obtain the gender feature of the target face.
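Steps 261 to 264 can be sketched together: divide the feature image into blocks, compress 256 grey levels to 16, histogram each block, and concatenate. The n = m = 2 used in the test is only for brevity; the text suggests n = 7:

```python
def block_histograms(img, n=2, m=2, levels=16, src_levels=256):
    # Split a feature image into n x m equal blocks, compress each block's
    # grey levels from src_levels down to `levels` (e.g. 256 -> 16), and
    # concatenate the per-block normalized histograms left-to-right,
    # top-to-bottom into one feature vector.
    h, w = len(img), len(img[0])
    bh, bw = h // n, w // m
    factor = src_levels // levels  # grey-level compression ratio
    feat = []
    for by in range(n):
        for bx in range(m):
            hist = [0] * levels
            for y in range(by * bh, (by + 1) * bh):
                for x in range(bx * bw, (bx + 1) * bw):
                    hist[img[y][x] // factor] += 1
            total = bh * bw
            feat.extend(v / total for v in hist)
    return feat
```

Concatenating the vectors of all (e.g. 8) feature images then gives the gender feature of the target face.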
In some embodiments, optionally, in step 28, an SVM (support vector machine) may be trained on the gender features of the target faces in all the face image samples to obtain the trained face gender classifier. Optionally, the kernel function of the SVM is a linear kernel function.
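A minimal sketch of this training step using scikit-learn — an assumed tooling choice, since the disclosure only requires an SVM with a linear kernel. The toy feature vectors below stand in for the concatenated histogram features:

```python
from sklearn.svm import SVC

def train_gender_classifier(features, labels):
    # Linear-kernel SVM over the per-sample gender feature vectors;
    # labels are e.g. 0 for a female face, 1 for a male face.
    clf = SVC(kernel="linear")
    clf.fit(features, labels)
    return clf

# Toy, linearly separable stand-ins for the histogram feature vectors.
X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
y = [1, 1, 0, 0]
clf = train_gender_classifier(X, y)
```

At recognition time, the same feature extraction is applied to the image to be recognized and the resulting vector is passed to `clf.predict`.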
Referring to fig. 5, fig. 5 is a schematic structural diagram of a face gender recognition device of the present disclosure, in which the face gender recognition device 30 includes:
the face detector 31 is configured to perform face detection on an image to be recognized, and acquire a target face in the image to be recognized;
a face feature point extractor 32 for extracting face feature points from the target face;
a face region-of-interest obtainer 33, configured to obtain a face region-of-interest image according to the face feature points;
the image feature extractor 34 is configured to perform feature extraction on the face region-of-interest image to obtain a feature image;
a gray level compression processor 35, configured to perform gray level compression processing on the feature image, and obtain a gray level histogram of the image after gray level compression;
a gender feature obtainer 36, configured to obtain a gender feature of the target face according to the gray histogram;
and the gender identifier 37 is configured to input the gender characteristics of the target face into a trained face gender classifier to obtain a gender identification result of the target face.
Optionally, the face gender recognition device of the present disclosure further includes:
a corrector, configured to perform geometric correction on the target face according to the face feature points to obtain a corrected image in which the line connecting the two eyes of the target face is horizontal.
Optionally, the face gender recognition device of the present disclosure further includes:
a scaler, configured to scale the face region-of-interest image to a specified size.
Optionally, the image feature extractor 34 is configured to perform feature extraction on the face region-of-interest image using the CoLBP feature extraction algorithm to obtain a plurality of feature images, for example, 8 feature images. The number of feature images obtained in the present disclosure may be other numbers, and is not limited herein.
The gender feature obtainer 36 is configured to concatenate the gray level histograms of the feature images to obtain the gender feature of the target face.
Optionally, referring to fig. 6, the gray level compression processor 35 includes:
a dividing unit 351, configured to divide the feature image into n × m image blocks of the same size, where n is a positive integer greater than or equal to 2 and m is a positive integer greater than or equal to 2;
a compressing unit 352, configured to compress, for at least one of the image blocks, a gray level of the image block to a first value, so as to obtain a gray-level compressed image, where the first value is smaller than a gray level of the feature image;
an obtaining unit 353, configured to obtain a gray level histogram of the image after gray level compression;
the connecting unit 354 is configured to concatenate the gray level histograms of the n × m image blocks to obtain the gray level histogram of the feature image.
Each device in the face gender identification apparatus in the above embodiments and the unit module included in each device may be implemented by hardware, for example, by a hardware circuit.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a training device of a face gender classifier according to the present disclosure, wherein the training device 40 of the face gender classifier includes:
a face image sample acquirer 41, configured to acquire a plurality of face image samples, where the plurality of face image samples include a plurality of images with male faces and a plurality of images with female faces;
a face detector 42, configured to perform face detection on the face image sample, and determine a position of a target face in the face image sample;
a face feature point extractor 43 for extracting face feature points of the target face;
a face region-of-interest obtainer 44, configured to obtain a face region-of-interest image according to the face feature points;
the image feature extractor 45 is configured to perform feature extraction on the face region-of-interest image to obtain a feature image;
a gray level compression processor 46, configured to perform gray level compression processing on the feature image, and obtain a gray level histogram of the image after gray level compression;
a gender feature obtainer 47, configured to obtain a gender feature of the target face according to the gray histogram;
and the trainer 48 is used for training the face gender classifier by adopting the gender characteristics of the target face in all the face image samples to obtain the trained face gender classifier.
Optionally, the training device of the face gender classifier of the present disclosure further includes:
a corrector, configured to perform geometric correction on the target face according to the face feature points to obtain a corrected image in which the line connecting the two eyes of the target face is horizontal.
Optionally, the training device of the face gender classifier of the present disclosure further includes:
a scaler, configured to scale the face region-of-interest image to a specified size.
Optionally, the image feature extractor 45 is configured to perform feature extraction on the face region-of-interest image using the CoLBP feature extraction algorithm to obtain a plurality of feature images, for example, 8 feature images. The number of feature images obtained in the present disclosure may be other numbers, and is not limited herein.
The gender feature obtainer 47 is configured to concatenate the gray level histograms of the plurality of feature images to obtain the gender feature of the target face.
Optionally, referring to fig. 8, the gray level compression processor 46 includes:
a dividing unit 441, configured to divide the feature image into n × m image blocks of the same size, where n is a positive integer greater than or equal to 2 and m is a positive integer greater than or equal to 2;
a compressing unit 442, configured to compress, for at least one of the image blocks, a gray level of the image block to a first value, so as to obtain a gray-level compressed image, where the first value is smaller than a gray level of the feature image;
an acquiring unit 443 configured to acquire a grayscale histogram of the grayscale-compressed image;
and the connecting unit 444 is configured to concatenate the gray level histograms of the n × m image blocks to obtain the gray level histogram of the feature image.
Each device in the training apparatus of the face gender classifier in the above embodiments and the unit modules included in each device may be implemented in a hardware manner, for example, implemented by a hardware circuit.
Referring to fig. 9, fig. 9 is another schematic structural diagram of a face gender recognition device of the present disclosure, in which the face gender recognition device 50 includes: a processor 51, a memory 52, and a computer program stored on the memory 52 and executable on the processor 51, the computer program when executed by the processor 51 implementing the steps of:
carrying out face detection on an image to be recognized to acquire a target face in the image to be recognized;
extracting the characteristic points of the human face of the target human face;
obtaining a face interesting region image according to the face characteristic points;
extracting the features of the face region-of-interest image to obtain a feature image;
performing gray level compression processing on the characteristic image, and acquiring a gray level histogram of the image after gray level compression;
obtaining gender characteristics of the target face according to the gray level histogram;
and inputting the gender characteristics of the target face into a trained face gender classifier to obtain a gender identification result of the target face.
Optionally, the computer program when executed by the processor 51 may further implement the steps of:
after the facial feature points of the target face are extracted, before a face region-of-interest image is obtained according to the facial feature points, the method further includes:
and geometrically correcting the target face according to the face characteristic points to obtain a corrected image, wherein in the corrected image, a connecting line between two eyes of the target face is a horizontal line.
Optionally, the computer program when executed by the processor 51 may further implement the steps of:
the performing of feature extraction on the face region-of-interest image to obtain a feature image further includes:
scaling the face region-of-interest image to a specified size.
Optionally, the computer program when executed by the processor 51 may further implement the steps of:
the performing of feature extraction on the face region-of-interest image to obtain a feature image includes: performing feature extraction on the face region-of-interest image using a co-occurrence local binary pattern (CoLBP) feature extraction algorithm to obtain a plurality of feature images;
the obtaining of the gender feature of the target face according to the gray level histogram includes: concatenating the gray level histograms of the plurality of feature images to obtain the gender feature of the target face.
Optionally, the computer program when executed by the processor 51 may further implement the steps of:
the performing of gray level compression processing on the feature image and obtaining a gray level histogram of the gray-level-compressed image includes:
dividing the feature image into n × m image blocks of the same size, where n is a positive integer greater than or equal to 2 and m is a positive integer greater than or equal to 2;
for at least one image block, compressing the gray level of the image block to a first value to obtain a gray-level-compressed image, where the first value is smaller than the gray level of the feature image;
acquiring a gray level histogram of the gray-level-compressed image;
concatenating the gray level histograms of the n × m image blocks to obtain the gray level histogram of the feature image.
Referring to fig. 10, fig. 10 is another schematic structural diagram of a training apparatus for a face gender classifier according to the present disclosure, in which the training device 60 of the face gender classifier includes: a processor 61, a memory 62, and a computer program stored on the memory 62 and executable on the processor 61, the computer program when executed by the processor 61 implementing the steps of:
acquiring a plurality of face image samples, wherein the face image samples comprise a plurality of images with male faces and a plurality of images with female faces;
carrying out face detection on the face image sample, and determining the position of a target face in the face image sample;
extracting the characteristic points of the human face of the target human face;
obtaining a face interesting region image according to the face characteristic points;
extracting the features of the face region-of-interest image to obtain a feature image;
performing gray level compression processing on the characteristic image, and acquiring a gray level histogram of the image after gray level compression;
obtaining gender characteristics of the target face according to the gray level histogram;
and training a face gender classifier by adopting the gender characteristics of the target face in all the face image samples to obtain the trained face gender classifier.
Optionally, the computer program when executed by the processor 61 may further implement the steps of:
after the facial feature points of the target face are extracted, before a face region-of-interest image is obtained according to the facial feature points, the method further includes:
and geometrically correcting the target face according to the face characteristic points to obtain a corrected image, wherein in the corrected image, a connecting line between two eyes of the target face is a horizontal line.
Optionally, the computer program when executed by the processor 61 may further implement the steps of:
the performing of feature extraction on the face region-of-interest image to obtain a feature image further includes:
scaling the face region-of-interest image to a specified size.
Optionally, the computer program when executed by the processor 61 may further implement the steps of:
the feature extraction of the face region-of-interest image to obtain a feature image comprises the following steps: performing feature extraction on the face region-of-interest image by using a CoLBP feature extraction algorithm to obtain a plurality of feature images;
the obtaining of the gender feature of the target face according to the gray level histogram comprises: and connecting the gray level histograms of the plurality of characteristic images to obtain the gender characteristics of the target face.
Optionally, the computer program when executed by the processor 61 may further implement the steps of:
the performing of gray level compression processing on the feature image and obtaining a gray level histogram of the gray-level-compressed image includes:
dividing the feature image into n × m image blocks of the same size, where n is a positive integer greater than or equal to 2 and m is a positive integer greater than or equal to 2;
for at least one image block, compressing the gray level of the image block to a first value to obtain a gray-level-compressed image, where the first value is smaller than the gray level of the feature image;
acquiring a gray level histogram of the gray-level-compressed image;
concatenating the gray level histograms of the n × m image blocks to obtain the gray level histogram of the feature image.
The present disclosure also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the processes of the above embodiments of the face gender identification method and achieves the same technical effects; to avoid repetition, the details are not repeated here.
The present disclosure further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the processes of the above embodiments of the training method of the face gender classifier and achieves the same technical effects; to avoid repetition, the details are not repeated here.
The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present disclosure.
While the present disclosure has been described with reference to the embodiments illustrated in the drawings, which are intended to be illustrative rather than restrictive, it will be apparent to those of ordinary skill in the art in light of the present disclosure that many more modifications may be made without departing from the spirit of the disclosure and the scope of the appended claims.

Claims (10)

1. A face gender identification method comprises the following steps:
carrying out face detection on an image to be recognized to acquire a target face in the image to be recognized;
extracting the characteristic points of the human face of the target human face;
obtaining a face interesting region image according to the face characteristic points;
extracting the features of the face region-of-interest image to obtain a feature image;
performing gray level compression processing on the characteristic image, and acquiring a gray level histogram of the image after gray level compression;
obtaining gender characteristics of the target face according to the gray level histogram;
and inputting the gender characteristics of the target face into a trained face gender classifier to obtain a gender identification result of the target face.
2. The method as claimed in claim 1, wherein after the extracting of the face feature points of the target face and before obtaining the face region-of-interest image according to the face feature points, the method further comprises:
and geometrically correcting the target face according to the face characteristic points to obtain a corrected image, wherein in the corrected image, a connecting line between two eyes of the target face is a horizontal line.
3. The method as claimed in claim 1, wherein the extracting of features from the face region-of-interest image to obtain a feature image further comprises:
scaling the face region-of-interest image to a specified size.
4. The method of claim 1, wherein,
the extracting of features from the face region-of-interest image to obtain a feature image comprises: performing feature extraction on the face region-of-interest image using a co-occurrence local binary pattern (CoLBP) feature extraction algorithm to obtain a plurality of feature images;
the obtaining of the gender feature of the target face according to the gray level histogram comprises: concatenating the gray level histograms of the plurality of feature images to obtain the gender feature of the target face.
5. The method of claim 1, wherein performing a gray level compression process on the feature image and obtaining a gray level histogram of the gray level compressed image comprises:
dividing the feature image into n × m image blocks of the same size, wherein n is a positive integer greater than or equal to 2 and m is a positive integer greater than or equal to 2;
for at least one image block, compressing the gray level of the image block to a first value to obtain a gray-level-compressed image, wherein the first value is smaller than the gray level of the feature image;
acquiring a gray level histogram of the gray-level-compressed image;
concatenating the gray level histograms of the n × m image blocks to obtain the gray level histogram of the feature image.
6. A training method of a face gender classifier comprises the following steps:
acquiring a plurality of face image samples, wherein the face image samples comprise a plurality of images with male faces and a plurality of images with female faces;
carrying out face detection on the face image sample, and determining the position of a target face in the face image sample;
extracting the characteristic points of the human face of the target human face;
obtaining a face interesting region image according to the face characteristic points;
extracting the features of the face region-of-interest image to obtain a feature image;
performing gray level compression processing on the characteristic image, and acquiring a gray level histogram of the image after gray level compression;
obtaining gender characteristics of the target face according to the gray level histogram;
and training a face gender classifier by adopting the gender characteristics of the target face in all the face image samples to obtain the trained face gender classifier.
7. A face gender identification device, comprising:
the face detector is used for carrying out face detection on an image to be recognized and acquiring a target face in the image to be recognized;
a face feature point extractor for extracting face feature points from the target face;
a face region-of-interest acquirer, configured to obtain a face region-of-interest image according to the face feature points;
the image feature extractor is used for extracting features of the face region-of-interest image to obtain a feature image;
the gray level compression processor is used for carrying out gray level compression processing on the characteristic image and acquiring a gray level histogram of the image after gray level compression;
a gender characteristic obtainer for obtaining gender characteristics of the target face according to the gray level histogram;
and the gender recognizer is used for inputting the gender characteristics of the target face into the trained face gender classifier to obtain the gender recognition result of the target face.
8. A training device of a face gender classifier comprises:
the face image sample acquirer is used for acquiring a plurality of face image samples, wherein the face image samples comprise a plurality of images with male faces and a plurality of images with female faces;
the face detector is used for carrying out face detection on the face image sample and determining the position of a target face in the face image sample;
a face feature point extractor for extracting face feature points from the target face;
a face region-of-interest acquirer, configured to obtain a face region-of-interest image according to the face feature points;
the image feature extractor is used for extracting features of the face region-of-interest image to obtain a feature image;
the gray level compression processor is used for carrying out gray level compression processing on the characteristic image and acquiring a gray level histogram of the image after gray level compression;
a gender characteristic obtainer for obtaining gender characteristics of the target face according to the gray level histogram;
and the trainer is used for training the face gender classifier by adopting the gender characteristics of the target face in all the face image samples to obtain the trained face gender classifier.
9. A face gender identification device, comprising a processor, a memory and a computer program stored on the memory and operable on the processor, wherein the computer program, when executed by the processor, implements the steps of the face gender identification method as claimed in any one of claims 1 to 5.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the face gender identification method according to any one of claims 1 to 5; alternatively, the computer program when being executed by a processor realizes the steps of the training method of the face gender classifier as claimed in claim 6.
CN201980001859.5A 2019-09-29 2019-09-29 Face gender identification method, and training method and device of face gender classifier Pending CN110785769A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/109014 WO2021056531A1 (en) 2019-09-29 2019-09-29 Face gender recognition method, face gender classifier training method and device

Publications (1)

Publication Number Publication Date
CN110785769A true CN110785769A (en) 2020-02-11

Family

ID=69394840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980001859.5A Pending CN110785769A (en) 2019-09-29 2019-09-29 Face gender identification method, and training method and device of face gender classifier

Country Status (2)

Country Link
CN (1) CN110785769A (en)
WO (1) WO2021056531A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418957B (en) * 2021-12-24 2022-11-18 广州大学 Global and local binary pattern image crack segmentation method based on robot vision

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120114198A1 (en) * 2010-11-08 2012-05-10 Yang Ting-Ting Facial image gender identification system and method thereof
CN103593648A (en) * 2013-10-22 2014-02-19 上海交通大学 Face recognition method for open environment
CN103914683A (en) * 2013-12-31 2014-07-09 闻泰通讯股份有限公司 Gender identification method and system based on face image
CN104598888A (en) * 2015-01-28 2015-05-06 广州远信网络科技发展有限公司 Human face gender recognition method
CN104778481A (en) * 2014-12-19 2015-07-15 五邑大学 Method and device for creating sample library for large-scale face mode analysis


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Li Kunlun; Wang Mingyan: "Face gender classification method based on PCA and LBP", Computer Knowledge and Technology, no. 28, pages 8023-8025 *
Du Xiuli; Zhang Wei; Gu Binbin; Chen Bo; Qiu Shaoming: "Image adaptive block compressed sensing method based on the gray-level co-occurrence matrix", no. 08 *

Also Published As

Publication number Publication date
WO2021056531A1 (en) 2021-04-01

Similar Documents

Publication Publication Date Title
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN106960202B (en) Smiling face identification method based on visible light and infrared image fusion
CN110147721B (en) Three-dimensional face recognition method, model training method and device
CN110569731B (en) Face recognition method and device and electronic equipment
CN111199230B (en) Method, device, electronic equipment and computer readable storage medium for target detection
KR20180109665A (en) A method and apparatus of image processing for object detection
JP6112801B2 (en) Image recognition apparatus and image recognition method
JP5671928B2 (en) Learning device, learning method, identification device, identification method, and program
CN105335725A (en) Gait identification identity authentication method based on feature fusion
CN112381061B (en) Facial expression recognition method and system
JP6756406B2 (en) Image processing equipment, image processing method and image processing program
CN113011253B (en) Facial expression recognition method, device, equipment and storage medium based on ResNeXt network
CN112784712B (en) Missing child early warning implementation method and device based on real-time monitoring
CN112836625A (en) Face living body detection method and device and electronic equipment
CN114255468A (en) Handwriting recognition method and related equipment thereof
CN111178221A (en) Identity recognition method and device
CN111353385A (en) Pedestrian re-identification method and device based on mask alignment and attention mechanism
CN111985488B (en) Target detection segmentation method and system based on offline Gaussian model
CN116523916B (en) Product surface defect detection method and device, electronic equipment and storage medium
CN110785769A (en) Face gender identification method, and training method and device of face gender classifier
KR20180092453A (en) Face recognition method Using convolutional neural network and stereo image
Bae et al. Fingerprint image denoising and inpainting using convolutional neural network
CN111382741B (en) Method, system and equipment for detecting text in natural scene picture
JP2008152611A (en) Image recognition device, electronic equipment, image recognition method, and image recognition program
CN115984316B (en) Industrial image edge extraction method and device for complex environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination