WO2021056531A1 - Face gender recognition method, face gender classifier training method and device - Google Patents
- Publication number
- WO2021056531A1 (PCT/CN2019/109014)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- image
- feature
- gender
- gray
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/168—Human faces: Feature extraction; Face representation
- G06V10/25—Image preprocessing: Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G06V40/172—Human faces: Classification, e.g. identification
Definitions
- The present disclosure relates to the field of image processing technology, and in particular to a facial gender recognition method and to a method and device for training a facial gender classifier.
- The human face is a very important biometric feature. Face-based recognition has gradually become a research focus in recent years. Face gender recognition has broad development prospects and application value in pattern recognition, artificial intelligence, computer vision, information security, and other fields.
- the present disclosure provides a face gender recognition method, a face gender classifier training method and device.
- the present disclosure provides a face gender recognition method, including:
- the gender feature of the target face is input into the trained face gender classifier to obtain the gender recognition result of the target face.
- the method further includes:
- performing geometric correction on the target face according to the facial feature points to obtain a corrected image.
- the line between the two eyes of the target face is a horizontal line.
- the performing feature extraction on the image of the region of interest of the face to obtain a feature image also includes:
- the image of the region of interest of the face is scaled to a specified size.
- the performing feature extraction on the facial region of interest image to obtain a feature image includes: using a co-occurrence local binary pattern (CoLBP) feature extraction algorithm to perform feature extraction on the facial region of interest image to obtain multiple feature images;
- the obtaining the gender feature of the target face according to the gray-scale histogram includes: connecting the gray-scale histograms of the multiple feature images to obtain the gender feature of the target face.
- the performing gray-scale compression processing on the characteristic image and obtaining the gray-scale histogram of the gray-scale compressed image includes:
- n is a positive integer greater than or equal to 2
- m is a positive integer greater than or equal to 2
- the present disclosure provides a method for training a face gender classifier, including:
- acquiring multiple face image samples, where the multiple face image samples include multiple images with male faces and multiple images with female faces;
- the face gender classifier is trained by using the gender features of the target face in all the face image samples to obtain a trained face gender classifier.
- the method further includes:
- performing geometric correction on the target face according to the facial feature points to obtain a corrected image.
- the line between the two eyes of the target face is a horizontal line.
- the performing feature extraction on the image of the region of interest of the face to obtain a feature image also includes:
- the image of the region of interest of the face is scaled to a specified size.
- the performing feature extraction on the face region of interest image to obtain a feature image includes: using a CoLBP feature extraction algorithm to perform feature extraction on the face region of interest image to obtain multiple feature images;
- the obtaining the gender feature of the target face according to the gray-scale histogram includes: connecting the gray-scale histograms of the multiple feature images to obtain the gender feature of the target face.
- the performing gray-scale compression processing on the characteristic image and obtaining the gray-scale histogram of the gray-scale compressed image includes:
- n is a positive integer greater than or equal to 2
- m is a positive integer greater than or equal to 2
- the present disclosure provides a face gender recognition device, including:
- a face detector configured to perform face detection on the image to be recognized, and obtain the target face in the image to be recognized
- a face feature point extractor for extracting face feature points on the target face
- a face region of interest obtainer configured to obtain an image of a face region of interest according to the facial feature points
- an image feature extractor configured to perform feature extraction on the image of the region of interest of the face to obtain a feature image
- a gray-scale compression processor configured to perform gray-scale compression processing on the characteristic image, and obtain a gray-scale histogram of the image after the gray-scale compression
- a gender feature obtainer configured to obtain the gender feature of the target face according to the grayscale histogram
- the gender recognizer is used to input the gender characteristics of the target face into the trained face gender classifier to obtain the gender recognition result of the target face.
- the present disclosure provides a training device for a face gender classifier, including:
- a face image sample acquirer configured to acquire multiple face image samples, the multiple face image samples including multiple images with male faces and multiple images with female faces;
- a face feature point extractor for extracting face feature points on the target face
- a face region of interest obtainer configured to obtain an image of a face region of interest according to the facial feature points
- an image feature extractor configured to perform feature extraction on the image of the region of interest of the face to obtain a feature image
- a gray-scale compression processor configured to perform gray-scale compression processing on the characteristic image, and obtain a gray-scale histogram of the image after the gray-scale compression
- a gender feature obtainer configured to obtain the gender feature of the target face according to the grayscale histogram
- the trainer is used to train the face gender classifier by using the gender characteristics of the target face in all the face image samples to obtain the trained face gender classifier.
- the present disclosure provides a face gender recognition device, including a processor, a memory, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the steps of the above face gender recognition method are realized.
- the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the aforementioned face gender recognition method are realized, or the steps of the aforementioned method for training a face gender classifier are realized.
- FIG. 1 is a schematic flowchart of a face gender recognition method provided by some embodiments of the present disclosure
- FIG. 2 is a schematic flowchart of obtaining a grayscale histogram of an image after grayscale compression according to some embodiments of the present disclosure
- FIG. 3 is a schematic flowchart of a method for training a face gender classifier provided by some embodiments of the present disclosure
- FIG. 4 is a schematic flowchart of obtaining a grayscale histogram of an image after grayscale compression according to some other embodiments of the present disclosure
- FIG. 5 is a schematic structural diagram of a face gender recognition device provided by some embodiments of the present disclosure.
- FIG. 6 is a schematic structural diagram of a grayscale compression processor provided by some embodiments of the present disclosure.
- FIG. 7 is a schematic structural diagram of a training device for a face gender classifier provided by some embodiments of the present disclosure.
- FIG. 8 is a schematic structural diagram of a gray scale compression processor provided by some other embodiments of the present disclosure.
- FIG. 9 is another schematic structural diagram of a face gender recognition device provided by some embodiments of the present disclosure.
- FIG. 10 is a schematic diagram of another structure of a training device for a face gender classifier provided by some embodiments of the present disclosure.
- FIG. 1 is a schematic flowchart of a face gender recognition method provided by some embodiments of the present disclosure. The method includes:
- Step 11 Perform face detection on the image to be recognized, and obtain the target face in the image to be recognized;
- Face detection is to search for a face from an image and determine the position and size of the face.
- the Dlib library can be used for face detection.
- Dlib is a toolbox that contains machine learning algorithms and tools for creating complex software to solve real-world problems.
- the Dlib library includes tools for object detection in images, including frontal face detection and object pose estimation.
- the present disclosure can also use other algorithms for face detection.
- Step 12 Perform face feature point extraction on the target face
- the Dlib library can be used to extract 68 facial feature points.
- the number of facial feature points extracted in the present disclosure may be other numbers, which is not limited here.
- Step 13 Obtain a region of interest (ROI) image of the face according to the feature points of the face;
- Step 14 Perform feature extraction on the image of the region of interest of the face to obtain a feature image
- Step 15 Perform gray-scale compression processing on the characteristic image, and obtain a gray-scale histogram of the image after gray-scale compression;
- the gray level of the feature image before compression is 256, that is, the range of gray value is 0 to 255
- the feature image is subjected to gray-level compression processing so that the gray level of the compressed image is less than 256; for example, when the compressed gray level is 16, the range of gray values is 0 to 15.
- the gray level in the present disclosure can be in other ranges, which are not limited here.
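By way of illustration only (not part of the claims), the 256-to-16 gray-level compression described above amounts to uniform quantization; a minimal Python sketch, with the hypothetical helper name `compress_gray`:

```python
import numpy as np

def compress_gray(img, levels=16):
    """Uniformly quantize an 8-bit image (gray values 0-255) down to `levels` gray levels."""
    step = 256 // levels                     # width of each quantization bin (16 here)
    return (img // step).astype(np.uint8)    # resulting values lie in 0..levels-1

# A toy 2x2 8-bit patch: the extremes 0 and 255 map to levels 0 and 15.
patch = np.array([[0, 255], [128, 16]], dtype=np.uint8)
compressed = compress_gray(patch)
```

Each 8-bit value is mapped to one of 16 bins by integer division, so 0 maps to level 0 and 255 to level 15.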
- Step 16 Obtain the gender characteristics of the target face according to the grayscale histogram
- Step 17 Input the gender characteristics of the target face into the trained face gender classifier to obtain the gender recognition result of the target face.
- the feature image is subjected to gray-scale compression processing to reduce the dimensionality of the features in the feature image, thereby reducing the influence of the age, expression, and illumination of the target face on gender recognition, improving the recognition accuracy, and making
- the face gender recognition method is not only suitable for laboratory scenes, but can also be used in natural environments and other scenes.
- the recognition speed can also be effectively improved, making the method suitable for devices with high recognition-speed requirements, such as embedded devices without a GPU (graphics processing unit).
- Optionally, after extracting facial feature points on the target face in step 12 and before obtaining a face region of interest image based on the facial feature points in step 13, the method further includes: preprocessing the image to be recognized.
- the preprocessing may include: filtering the image to be recognized.
- Image filtering suppresses image noise while preserving image details as much as possible; the quality of the filtering directly affects the effectiveness and reliability of subsequent image processing and analysis.
- a Gaussian filter may be used to filter the image to be recognized to remove image noise.
- the preprocessing may include: performing geometric correction on the target face according to the facial feature points to obtain a corrected image, and in the corrected image, the target face The line between the two eyes is a horizontal line.
- the affine transformation matrix may be determined according to the feature points of the face, and the geometric correction of the target face may be performed according to the affine transformation matrix.
- the target face detected from the image to be recognized may be tilted. Therefore, the target face needs to be geometrically corrected so that the line between the two eyes in the corrected target face is horizontal, which optimizes the subsequent feature extraction process.
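The eye-alignment correction can be sketched as follows: from the two eye landmarks, compute the tilt angle and build a 2×3 affine (rotation) matrix about the eye midpoint. This assumes a plain rotation suffices; the disclosure's exact affine matrix is not specified, and the function names are illustrative:

```python
import numpy as np

def eye_alignment_matrix(left_eye, right_eye):
    """2x3 affine matrix rotating the image about the eye midpoint so that the
    line between the two eyes becomes horizontal."""
    lx, ly = left_eye
    rx, ry = right_eye
    angle = np.arctan2(ry - ly, rx - lx)       # tilt of the inter-eye line
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0  # rotation center: eye midpoint
    c, s = np.cos(-angle), np.sin(-angle)      # rotate by -angle to undo the tilt
    # [x'; y'] = R @ ([x; y] - center) + center, flattened into a 2x3 matrix
    return np.array([[c, -s, cx - c * cx + s * cy],
                     [s,  c, cy - s * cx - c * cy]])

def warp_points(M, pts):
    """Apply a 2x3 affine matrix to a list of (x, y) points."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]

# Hypothetical tilted eye landmarks; after correction both eyes share one y value.
M = eye_alignment_matrix((30.0, 40.0), (70.0, 60.0))
eyes = warp_points(M, [(30.0, 40.0), (70.0, 60.0)])
```

The same matrix would be applied to the full image (e.g. with OpenCV's `warpAffine`) and to the landmark coordinates, which is why the corrected landmark positions are needed afterwards.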
- after the correction, it is also necessary to obtain the coordinates of the facial feature points in the corrected image in order to determine the face region of interest.
- the highest, lowest, leftmost, and rightmost points among the coordinates of the facial feature points can be obtained to determine the circumscribed rectangle of the face, and the circumscribed rectangle area can be cropped to obtain the face region of interest image.
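The circumscribed-rectangle cropping can be sketched as follows (a toy example with hypothetical landmark coordinates; `face_roi` is an illustrative name):

```python
import numpy as np

def face_roi(image, landmarks):
    """Crop the circumscribed (bounding) rectangle of the facial landmarks."""
    pts = np.asarray(landmarks)
    x_min, y_min = pts.min(axis=0)          # leftmost and highest landmark
    x_max, y_max = pts.max(axis=0)          # rightmost and lowest landmark
    return image[y_min:y_max + 1, x_min:x_max + 1]

img = np.arange(100).reshape(10, 10)        # toy 10x10 "image"
roi = face_roi(img, [(2, 3), (7, 3), (4, 8)])   # hypothetical (x, y) landmarks
```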
- performing feature extraction on the facial region of interest image in step 14 above to obtain a feature image further includes: scaling the facial region of interest image to a specified size. That is, the facial region of interest image is down-sampled to a fixed size, thereby reducing the impact of factors such as image resolution and the camera-to-face distance on face gender recognition and improving the robustness of the recognition method.
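The scaling to a specified size can be sketched with nearest-neighbor down-sampling (a real system might instead use bilinear interpolation; `resize_nearest` is an illustrative name):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor rescale of a 2-D gray image to a fixed (out_h, out_w) size."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h    # source row for each output row
    cols = np.arange(out_w) * w // out_w    # source column for each output column
    return img[np.ix_(rows, cols)]

big = np.arange(64, dtype=np.uint8).reshape(8, 8)
small = resize_nearest(big, 4, 4)           # down-sample 8x8 to a fixed 4x4
```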
- performing feature extraction on the image of the region of interest of the face in step 14 to obtain a feature image includes: using a CoLBP (Co-occurrence Local Binary Pattern) feature extraction algorithm to perform feature extraction on the image of the region of interest of the face to obtain multiple feature images, for example, 8 feature images.
- the number of feature images obtained in the present disclosure may be other numbers, which is not limited here.
- the obtaining the gender characteristics of the target face according to the gray-level histogram in step 16 includes: connecting the gray-scale histograms of the multiple feature images to obtain the gender characteristics of the target face.
- Compared with existing CNN (Convolutional Neural Network) feature extraction algorithms, the CoLBP feature extraction algorithm has greatly reduced computational complexity, which can effectively improve the recognition speed, making it suitable for devices with high recognition-speed requirements, such as embedded devices without a GPU.
- the CoLBP feature extraction algorithm can be specifically as follows: first, use multiple directional filters to extract the edge response feature images of the face region of interest image in multiple directions; then, compute the LBP (Local Binary Pattern) features of the edge response feature images to obtain multiple feature images.
- for example, eight directional filters are used to extract the edge response feature images of the face region of interest image in eight directions; then, LBP features are calculated for the edge response feature images in each of the eight directions to obtain multiple feature images.
- the eight directions may respectively include two directions on the horizontal center line of the image, two directions on the vertical center line, and four directions on the diagonal line.
- the use of 8 directional filters can provide more refined image edge response characteristics.
- the present disclosure does not exclude the use of 4 directional filters or more than 8 directional filters.
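An illustrative sketch of the eight-direction CoLBP pipeline. The disclosure does not specify its filter kernels, so Kirsch-style masks rotated in 45° steps are assumed here; the LBP step is the standard 8-neighbour code:

```python
import numpy as np

def rotate_mask(k):
    """Rotate a 3x3 mask by 45 degrees: circularly shift its 8 border cells."""
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [k[i] for i in idx]
    out = k.copy()
    for (i, j), v in zip(idx, vals[-1:] + vals[:-1]):
        out[i, j] = v
    return out

def directional_responses(img):
    """Edge responses in 8 directions using rotated Kirsch-style masks
    (the disclosure does not name its filters; Kirsch masks are an assumption)."""
    base = np.array([[5.0, 5.0, 5.0],
                     [-3.0, 0.0, -3.0],
                     [-3.0, -3.0, -3.0]])
    pad = np.pad(img.astype(float), 1, mode='edge')
    h, w = img.shape
    responses, mask = [], base
    for _ in range(8):
        resp = np.zeros((h, w))
        for dy in range(3):
            for dx in range(3):
                resp += mask[dy, dx] * pad[dy:dy + h, dx:dx + w]
        responses.append(resp)
        mask = rotate_mask(mask)
    return responses

def lbp(img):
    """Standard 8-neighbour LBP code per pixel (edge padding at the borders)."""
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros((h, w), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        code |= (nb >= img).astype(np.uint8) << np.uint8(bit)
    return code

def colbp_features(roi):
    """CoLBP sketch: LBP codes of the 8 directional edge-response images."""
    return [lbp(r) for r in directional_responses(roi)]

feats = colbp_features(np.arange(25, dtype=float).reshape(5, 5))
```

Each of the eight response images yields one LBP feature image, matching the "8 feature images" example above.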
- Performing gray-scale compression processing on the feature image in step 15 above and obtaining a gray-scale histogram of the image after gray-scale compression includes:
- Step 151 Divide the characteristic image into n×m image blocks of the same size, where n is a positive integer greater than or equal to 2, and m is a positive integer greater than or equal to 2; for example, n is 7, and the value of m may be the same as n or different.
- Step 152 For at least one of the image blocks, compress the gray level of the image block to a first value to obtain a gray-level compressed image, where the first value is less than the gray level of the characteristic image;
- the gray level of the characteristic image is, for example, 256, and the first value is, for example, 16.
- Step 153 Obtain a grayscale histogram of the image after grayscale compression
- the grayscale histogram is a normalized histogram.
- Step 154 Connect the gray histograms of the n ⁇ m image blocks to obtain the gray histogram of the characteristic image.
- the grayscale histograms of each image block of a feature image can be connected in the order from left to right and top to bottom to obtain the grayscale histogram of the feature image.
- the gray histograms of the eight feature images are connected to obtain the gender feature of the target face.
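Steps 151 to 154, together with the concatenation across the eight feature images, can be sketched as follows (assuming 7×7 blocks, 16 gray levels, and block dimensions that divide the image size evenly; the function names are illustrative):

```python
import numpy as np

def block_gray_histogram(feat_img, n=7, m=7, levels=16):
    """Steps 151-154: split into n x m blocks, compress each block to `levels`
    gray levels, take a normalized histogram per block, then concatenate the
    block histograms left-to-right, top-to-bottom (assumes n | height, m | width)."""
    h, w = feat_img.shape
    bh, bw = h // n, w // m
    hists = []
    for i in range(n):
        for j in range(m):
            block = feat_img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            compressed = block // (256 // levels)      # 256 gray levels -> 16
            hist = np.bincount(compressed.ravel(), minlength=levels).astype(float)
            hists.append(hist / hist.sum())            # normalized histogram
    return np.concatenate(hists)                       # length n * m * levels

def gender_feature(feature_images, n=7, m=7, levels=16):
    """Concatenate the block histograms of all (e.g. 8) feature images."""
    return np.concatenate([block_gray_histogram(f, n, m, levels)
                           for f in feature_images])

rng = np.random.default_rng(0)
feature_images = [rng.integers(0, 256, size=(56, 56), dtype=np.uint8)
                  for _ in range(8)]
vec = gender_feature(feature_images)    # 8 images x 7x7 blocks x 16 bins = 6272
```

The compression to 16 levels is what keeps this vector short: without it, each block histogram would have 256 bins instead of 16.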
- FIG. 3 is a schematic flowchart of a method for training a face gender classifier according to some embodiments of the present disclosure. The method includes:
- Step 21 Obtain multiple face image samples, where the multiple face image samples include multiple images with male faces and multiple images with female faces;
- the number of images with male faces and images with female faces may be the same or approximately the same.
- the images with male faces and the images with female faces include images of people of different age groups, so that the obtained face gender classifier can identify people of different age groups.
- Step 22 Perform face detection on the face image sample, and determine the position of the target face in the face image sample
- Face detection is to search for a face from an image and determine the position and size of the face.
- the Dlib library can be used for face detection.
- Step 23 Perform face feature point extraction on the target face
- the Dlib library can be used to extract 68 facial feature points.
- the number of facial feature points extracted in the present disclosure can be other numbers, which is not limited here.
- Step 24 Obtain an image of the region of interest of the face according to the feature points of the face;
- Step 25 Perform feature extraction on the image of the region of interest of the face to obtain a feature image
- Step 26 Perform gray-scale compression processing on the characteristic image, and obtain a gray-scale histogram of the image after the gray-scale compression;
- the gray level of the feature image before compression is 256, that is, the range of gray value is 0 to 255
- the feature image is subjected to gray-level compression processing so that the gray level of the compressed image is less than 256; for example, when the compressed gray level is 16, the range of gray values is 0 to 15.
- the gray level in the present disclosure can be in other ranges, which are not limited here.
- Step 27 Obtain the gender characteristics of the target face according to the grayscale histogram
- Step 28 Use the gender features of the target face in all the face image samples to train the face gender classifier to obtain a trained face gender classifier.
- the feature image is subjected to gray-scale compression processing to reduce the dimensionality of the features in the feature image, thereby reducing the influence of factors such as age, expression, and lighting of the target face on gender recognition, improving the recognition accuracy, so that
- the face gender classifier is not only suitable for laboratory scenes, but can also be used in natural environments and other scenes.
- Optionally, after the facial feature points are extracted on the target face in step 23 and before the face region of interest image is obtained according to the facial feature points in step 24, the method further includes: preprocessing the face image sample.
- the preprocessing may include: filtering the face image samples.
- Image filtering suppresses image noise while preserving image details as much as possible; the quality of the filtering directly affects the effectiveness and reliability of subsequent image processing and analysis.
- a Gaussian filter may be used to filter the face image samples to remove image noise.
- the preprocessing may include: performing geometric correction on the target face according to the facial feature points to obtain a corrected image, and in the corrected image, the target face The line between the two eyes is a horizontal line.
- the affine transformation matrix may be determined according to the feature points of the face, and the geometric correction of the target face may be performed according to the affine transformation matrix.
- the target face detected from the face image sample may be tilted. Therefore, the target face needs to be geometrically corrected so that the line between the two eyes in the corrected target face is horizontal, which optimizes the subsequent feature extraction process.
- after the correction, it is also necessary to obtain the coordinates of the facial feature points in the corrected image in order to determine the face region of interest.
- the highest, lowest, leftmost, and rightmost points among the coordinates of the facial feature points can be obtained to determine the circumscribed rectangle of the face, and the circumscribed rectangle area can be cropped to obtain the face region of interest image.
- performing feature extraction on the facial region of interest image in step 25 above to obtain a feature image further includes: scaling the facial region of interest image to a specified size. That is, the facial region of interest image is down-sampled to a fixed size, thereby reducing the impact of factors such as image resolution and the camera-to-face distance on face gender recognition and improving the robustness of the face gender classifier.
- performing feature extraction on the face region of interest image in step 25 to obtain a feature image includes: using a CoLBP feature extraction algorithm to perform feature extraction on the face region of interest image to obtain multiple feature images, for example, 8 feature images. The number of feature images obtained in the present disclosure may be other numbers, which is not limited here.
- obtaining the gender characteristics of the target face according to the grayscale histogram includes: connecting the grayscale histograms of the multiple feature images to obtain the gender characteristics of the target face.
- Compared with existing CNN feature extraction algorithms, the CoLBP feature extraction algorithm has greatly reduced computational complexity, which can effectively improve the recognition speed, making it suitable for devices with high recognition-speed requirements, such as embedded devices without a GPU.
- the CoLBP feature extraction algorithm can be specifically as follows: first, use multiple directional filters to extract the edge response feature images of the face region of interest image in multiple directions; then, compute the LBP features of the edge response feature images in the multiple directions to obtain multiple feature images.
- performing gray-scale compression processing on the feature image in step 26 and obtaining a gray-scale histogram of the image after gray-scale compression includes:
- Step 261 Divide the characteristic image into n×m image blocks of the same size, where n is a positive integer greater than or equal to 2, and m is a positive integer greater than or equal to 2; for example, n is 7, and the value of m may be the same as n or different.
- Step 262 For at least one of the image blocks, compress the gray level of the image block to a first value to obtain a gray-level compressed image, where the first value is less than the gray level of the characteristic image;
- the gray level of the characteristic image is, for example, 256, and the first value is, for example, 16.
- Step 263 Obtain a grayscale histogram of the image after grayscale compression
- the grayscale histogram is a normalized histogram.
- Step 264 Connect the gray histograms of the n ⁇ m image blocks to obtain the gray histogram of the characteristic image.
- the grayscale histograms of each image block of a feature image can be connected in the order from left to right and top to bottom to obtain the grayscale histogram of the feature image.
- the gray histograms of the eight feature images are connected to obtain the gender feature of the target face.
- SVM (Support Vector Machine)
- the kernel function of the SVM adopts a linear kernel function.
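For illustration, a linear SVM can be trained on the concatenated histogram features with a Pegasos-style subgradient solver; a production system would more likely use an off-the-shelf solver (e.g. LIBLINEAR via scikit-learn's `LinearSVC`). Labels are encoded as +1/-1 (e.g. male/female), and the toy data below stands in for real gender features:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style linear SVM: stochastic subgradient descent on the hinge loss.
    Labels y must be +1 / -1. A stand-in sketch, not the disclosure's own solver."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                  # decaying step size
            if y[i] * (X[i] @ w + b) < 1.0:        # margin violated: hinge term
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                                  # only the regularizer shrinks w
                w = (1.0 - eta * lam) * w
    return w, b

def predict(w, b, X):
    return np.where(X @ w + b >= 0.0, 1, -1)

# Toy stand-in for gender feature vectors: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 4)), rng.normal(2.0, 0.5, (50, 4))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
accuracy = float((predict(w, b, X) == y).mean())
```

A linear kernel keeps both training and prediction to a single dot product per sample, which matches the disclosure's emphasis on recognition speed.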
- FIG. 5 is a schematic structural diagram of the face gender recognition device of the present disclosure.
- the face gender recognition device 30 includes:
- the face detector 31 is configured to perform face detection on the image to be recognized, and obtain the target face in the image to be recognized;
- the face feature point extractor 32 is configured to perform face feature point extraction on the target face
- the face interest area obtainer 33 is configured to obtain an image of the face interest area according to the face feature points;
- the image feature extractor 34 is configured to perform feature extraction on the image of the region of interest of the face to obtain a feature image
- the gray level compression processor 35 is configured to perform gray level compression processing on the characteristic image, and obtain a gray level histogram of the image after the gray level compression;
- the gender feature obtainer 36 is configured to obtain the gender feature of the target face according to the gray histogram
- the gender recognizer 37 is configured to input the gender characteristics of the target face into the trained face gender classifier to obtain the gender recognition result of the target face.
- the face gender recognition device of the present disclosure further includes:
- the corrector is used to perform geometric correction on the target face according to the face feature points to obtain a corrected image; in the corrected image, the line between the two eyes of the target face is a horizontal line.
- the face gender recognition device of the present disclosure further includes:
- the scaler is used to scale the image of the region of interest of the face to a specified size.
- the image feature extractor 34 is configured to use the CoLBP feature extraction algorithm to perform feature extraction on the image of the region of interest of the face to obtain multiple feature images; for example, to obtain 8 feature images.
- the number of feature images obtained in the present disclosure may be other numbers, which is not limited here.
- the gender feature obtainer 36 is used to connect the gray histograms of the multiple feature images to obtain the gender feature of the target face.
- the gray scale compression processor 35 includes:
- the dividing unit 351 is configured to divide the characteristic image into n ⁇ m image blocks of the same size, where n is a positive integer greater than or equal to 2, and m is a positive integer greater than or equal to 2;
- the compression unit 352 is configured to, for at least one of the image blocks, compress the gray level of the image block to a first value to obtain a gray-level compressed image, where the first value is smaller than the gray level of the characteristic image;
- the obtaining unit 353 is configured to obtain the grayscale histogram of the grayscale compressed image
- the connecting unit 354 is configured to connect the grayscale histograms of the n ⁇ m image blocks to obtain the grayscale histogram of the characteristic image.
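The dividing, compression, obtaining, and connecting units above can be sketched together as a single function. The block counts n = m = 4 and the 16 compressed levels (the "first value") are illustrative choices, not values fixed by the disclosure.

```python
import numpy as np

def block_histogram_feature(feat_img, n=4, m=4, levels=16):
    """Divide a feature image into n*m equal blocks, compress each
    block's 256 gray levels down to `levels` (the "first value"),
    histogram each block, and concatenate the histograms."""
    h, w = feat_img.shape
    bh, bw = h // n, w // m
    hists = []
    for i in range(n):
        for j in range(m):
            block = feat_img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            # gray-level compression: map 0..255 onto 0..levels-1
            compressed = (block.astype(np.uint16) * levels // 256).astype(np.uint8)
            hists.append(np.bincount(compressed.ravel(), minlength=levels))
    return np.concatenate(hists)  # length n*m*levels

# synthetic 64x64 feature image
feat = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
vec = block_histogram_feature(feat)  # 4*4 blocks * 16 bins = 256-dim descriptor
```

Compressing the gray levels per block keeps each histogram short, so concatenating the n×m block histograms stays compact while preserving the spatial layout of the feature image.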
- Each device in the face gender recognition device in the foregoing embodiment and the unit modules included in each device can be implemented by hardware, for example, by a hardware circuit.
- FIG. 7 is a schematic structural diagram of a training device for a face gender classifier of the present disclosure.
- the training device 40 for a face gender classifier includes:
- the face image sample acquirer 41 is configured to acquire multiple face image samples, where the multiple face image samples include multiple images with male faces and multiple images with female faces;
- the face detector 42 is configured to perform face detection on the face image sample, and determine the position of the target face in the face image sample;
- the face feature point extractor 43 is configured to perform face feature point extraction on the target face;
- the face region of interest obtainer 44 is configured to obtain an image of the face region of interest according to the facial feature points;
- the image feature extractor 45 is configured to perform feature extraction on the image of the region of interest of the face to obtain a feature image;
- the gray level compression processor 46 is configured to perform gray level compression processing on the characteristic image, and obtain a gray level histogram of the image after the gray level compression;
- the gender feature obtainer 47 is configured to obtain the gender feature of the target face according to the gray histogram;
- the trainer 48 is configured to train the face gender classifier by using the gender characteristics of the target face in all the face image samples to obtain the trained face gender classifier.
- the training device for the face gender classifier of the present disclosure further includes:
- the corrector is configured to perform geometric correction on the target face according to the facial feature points to obtain a corrected image; in the corrected image, the line connecting the two eyes of the target face is horizontal.
- the training device for the face gender classifier of the present disclosure further includes:
- the scaler is used to scale the image of the region of interest of the face to a specified size.
- the image feature extractor 45 is configured to use the CoLBP feature extraction algorithm to perform feature extraction on the image of the face region of interest to obtain multiple feature images, for example, 8 feature images.
- the number of feature images obtained in the present disclosure may be any other number, which is not limited here.
- the gender feature obtainer 47 is configured to connect the gray histograms of the multiple feature images to obtain the gender feature of the target face.
- the gray-level compression processor 46 includes:
- the dividing unit 461 is configured to divide the feature image into n×m image blocks of the same size, where n is a positive integer greater than or equal to 2 and m is a positive integer greater than or equal to 2;
- the compression unit 462 is configured to, for at least one of the image blocks, compress the gray levels of the image block to a first value to obtain a gray-level compressed image, where the first value is smaller than the number of gray levels of the feature image;
- the obtaining unit 463 is configured to obtain the gray histogram of the gray-level compressed image;
- the connecting unit 464 is configured to connect the gray histograms of the n×m image blocks to obtain the gray histogram of the feature image.
- the various components of the training device for the face gender classifier in the foregoing embodiment and the unit modules included in each component can be implemented in a hardware manner, for example, through a hardware circuit.
- FIG. 9 is another structural diagram of the facial gender recognition device of the present disclosure.
- the facial gender recognition device 50 includes a processor 51, a memory 52, and a computer program stored in the memory 52 and executable on the processor 51; when the computer program is executed by the processor 51, the following steps are implemented:
- the gender feature of the target face is input into the trained face gender classifier to obtain the gender recognition result of the target face.
- the method further includes:
- geometric correction is performed on the target face according to the facial feature points to obtain a corrected image;
- in the corrected image, the line connecting the two eyes of the target face is horizontal.
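A minimal sketch of this eye-leveling step: compute the angle of the line through the two eye landmarks; rotating the image by the negative of that angle about the eye midpoint makes the line horizontal. The (x, y) landmark format is an assumption, not something fixed by the disclosure.

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Angle (degrees) of the line from left_eye to right_eye in
    image coordinates; rotating the image by its negative levels
    the eyes. Landmarks are hypothetical (x, y) tuples."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# eyes already at the same height -> no rotation needed
angle = eye_alignment_angle((30, 50), (70, 50))
```

In practice the rotation itself would be applied with an affine warp (e.g. a rotation matrix about the eye midpoint), which this sketch leaves out.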
- before the feature extraction is performed on the image of the region of interest of the face to obtain a feature image, the method further includes:
- the image of the region of interest of the face is scaled to a specified size.
- the performing feature extraction on the image of the region of interest of the face to obtain a feature image includes: using a co-occurrence local binary pattern (CoLBP) feature extraction algorithm to perform feature extraction on the image of the region of interest of the face to obtain multiple feature images;
- the obtaining the gender feature of the target face according to the gray-scale histogram includes: connecting the gray-scale histograms of the multiple feature images to obtain the gender feature of the target face.
- n is a positive integer greater than or equal to 2, and m is a positive integer greater than or equal to 2.
- FIG. 10 is another structural diagram of the training device for the face gender classifier of the present disclosure.
- the training device 60 for the face gender classifier includes a processor 61, a memory 62, and a computer program stored in the memory 62 and executable on the processor 61; when the computer program is executed by the processor 61, the following steps are implemented:
- multiple face image samples are acquired, where the multiple face image samples include multiple images with male faces and multiple images with female faces;
- the face gender classifier is trained by using the gender features of the target face in all the face image samples to obtain a trained face gender classifier.
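A hedged sketch of the training step above: this excerpt does not fix the classifier type, so a minimal nearest-centroid classifier stands in for the face gender classifier, trained on synthetic stand-ins for the concatenated gray-histogram gender features.

```python
import numpy as np

def train_centroid_classifier(X, y):
    """Minimal stand-in classifier: one centroid per gender class.
    The disclosure does not specify the classifier type here, so
    this is only an illustrative placeholder for the training step."""
    classes = np.unique(y)
    centroids = {c: X[y == c].mean(axis=0) for c in classes}

    def predict(x):
        # assign the class whose centroid is nearest in feature space
        return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

    return predict

# synthetic gender-feature vectors (stand-ins for concatenated gray histograms)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 64)), rng.normal(3, 1, (50, 64))])
y = np.array([0] * 50 + [1] * 50)  # 0 = male, 1 = female
predict = train_centroid_classifier(X, y)
```

A discriminative model such as a linear SVM is a common choice for this kind of histogram feature; the centroid version is used here only to keep the sketch self-contained.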
- the method further includes:
- geometric correction is performed on the target face according to the facial feature points to obtain a corrected image;
- in the corrected image, the line connecting the two eyes of the target face is horizontal.
- before the feature extraction is performed on the image of the region of interest of the face to obtain a feature image, the method further includes:
- the image of the region of interest of the face is scaled to a specified size.
- the performing feature extraction on the image of the region of interest of the human face to obtain a feature image includes: using a CoLBP feature extraction algorithm to perform feature extraction on the image of the region of interest of the human face to obtain multiple feature images;
- the obtaining the gender feature of the target face according to the gray-scale histogram includes: connecting the gray-scale histograms of the multiple feature images to obtain the gender feature of the target face.
- n is a positive integer greater than or equal to 2, and m is a positive integer greater than or equal to 2.
- the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, each process of the above face gender recognition method embodiments is implemented with the same technical effect, which is not repeated here to avoid repetition.
- the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, each process of the above method for training a face gender classifier is implemented with the same technical effect, which is not repeated here to avoid repetition.
- the computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
- the technical solution of the present disclosure, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the various embodiments of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (10)
- A face gender recognition method, comprising: performing face detection on an image to be recognized to acquire a target face in the image to be recognized; performing facial feature point extraction on the target face; obtaining an image of a face region of interest according to the facial feature points; performing feature extraction on the image of the face region of interest to obtain a feature image; performing gray-level compression processing on the feature image, and obtaining a gray histogram of the gray-level compressed image; obtaining a gender feature of the target face according to the gray histogram; and inputting the gender feature of the target face into a trained face gender classifier to obtain a gender recognition result of the target face.
- The method according to claim 1, wherein after the performing of facial feature point extraction on the target face and before the obtaining of the image of the face region of interest according to the facial feature points, the method further comprises: performing geometric correction on the target face according to the facial feature points to obtain a corrected image, wherein in the corrected image the line connecting the two eyes of the target face is horizontal.
- The method according to claim 1, wherein before the performing of feature extraction on the image of the face region of interest to obtain a feature image, the method further comprises: scaling the image of the face region of interest to a specified size.
- The method according to claim 1, wherein the performing of feature extraction on the image of the face region of interest to obtain a feature image comprises: using a co-occurrence local binary pattern feature extraction algorithm to perform feature extraction on the image of the face region of interest to obtain multiple feature images; and the obtaining of the gender feature of the target face according to the gray histogram comprises: connecting the gray histograms of the multiple feature images to obtain the gender feature of the target face.
- The method according to claim 1, wherein the performing of gray-level compression processing on the feature image and the obtaining of the gray histogram of the gray-level compressed image comprise: dividing the feature image into n×m image blocks of the same size, where n is a positive integer greater than or equal to 2 and m is a positive integer greater than or equal to 2; for at least one of the image blocks, compressing the gray levels of the image block to a first value to obtain a gray-level compressed image, the first value being smaller than the number of gray levels of the feature image; obtaining the gray histogram of the gray-level compressed image; and connecting the gray histograms of the n×m image blocks to obtain the gray histogram of the feature image.
- A method for training a face gender classifier, comprising: acquiring multiple face image samples, the multiple face image samples including multiple images with male faces and multiple images with female faces; performing face detection on each face image sample to determine a position of a target face in the face image sample; performing facial feature point extraction on the target face; obtaining an image of a face region of interest according to the facial feature points; performing feature extraction on the image of the face region of interest to obtain a feature image; performing gray-level compression processing on the feature image, and obtaining a gray histogram of the gray-level compressed image; obtaining a gender feature of the target face according to the gray histogram; and training a face gender classifier with the gender features of the target faces in all the face image samples to obtain a trained face gender classifier.
- A face gender recognition device, comprising: a face detector configured to perform face detection on an image to be recognized and acquire a target face in the image to be recognized; a facial feature point extractor configured to perform facial feature point extraction on the target face; a face region of interest obtainer configured to obtain an image of a face region of interest according to the facial feature points; an image feature extractor configured to perform feature extraction on the image of the face region of interest to obtain a feature image; a gray-level compression processor configured to perform gray-level compression processing on the feature image and obtain a gray histogram of the gray-level compressed image; a gender feature obtainer configured to obtain a gender feature of the target face according to the gray histogram; and a gender recognizer configured to input the gender feature of the target face into a trained face gender classifier to obtain a gender recognition result of the target face.
- A training device for a face gender classifier, comprising: a face image sample acquirer configured to acquire multiple face image samples, the multiple face image samples including multiple images with male faces and multiple images with female faces; a face detector configured to perform face detection on each face image sample and determine a position of a target face in the face image sample; a facial feature point extractor configured to perform facial feature point extraction on the target face; a face region of interest obtainer configured to obtain an image of a face region of interest according to the facial feature points; an image feature extractor configured to perform feature extraction on the image of the face region of interest to obtain a feature image; a gray-level compression processor configured to perform gray-level compression processing on the feature image and obtain a gray histogram of the gray-level compressed image; a gender feature obtainer configured to obtain a gender feature of the target face according to the gray histogram; and a trainer configured to train a face gender classifier with the gender features of the target faces in all the face image samples to obtain a trained face gender classifier.
- A face gender recognition device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the face gender recognition method according to any one of claims 1 to 5.
- A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps of the face gender recognition method according to any one of claims 1 to 5, or implements the steps of the method for training a face gender classifier according to claim 6.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/109014 WO2021056531A1 (en) | 2019-09-29 | 2019-09-29 | Face gender recognition method, face gender classifier training method and device |
CN201980001859.5A CN110785769A (en) | 2019-09-29 | 2019-09-29 | Face gender identification method, and training method and device of face gender classifier |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/109014 WO2021056531A1 (en) | 2019-09-29 | 2019-09-29 | Face gender recognition method, face gender classifier training method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021056531A1 true WO2021056531A1 (en) | 2021-04-01 |
Family
ID=69394840
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/109014 WO2021056531A1 (en) | 2019-09-29 | 2019-09-29 | Face gender recognition method, face gender classifier training method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110785769A (en) |
WO (1) | WO2021056531A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114418957A (en) * | 2021-12-24 | 2022-04-29 | 广州大学 | Global and local binary pattern image crack segmentation method based on robot vision |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120114198A1 (en) * | 2010-11-08 | 2012-05-10 | Yang Ting-Ting | Facial image gender identification system and method thereof |
CN103914683A (en) * | 2013-12-31 | 2014-07-09 | 闻泰通讯股份有限公司 | Gender identification method and system based on face image |
CN104598888A (en) * | 2015-01-28 | 2015-05-06 | 广州远信网络科技发展有限公司 | Human face gender recognition method |
CN104778481A (en) * | 2014-12-19 | 2015-07-15 | 五邑大学 | Method and device for creating sample library for large-scale face mode analysis |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103593648B (en) * | 2013-10-22 | 2017-01-18 | 上海交通大学 | Face recognition method for open environment |
-
2019
- 2019-09-29 CN CN201980001859.5A patent/CN110785769A/en active Pending
- 2019-09-29 WO PCT/CN2019/109014 patent/WO2021056531A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120114198A1 (en) * | 2010-11-08 | 2012-05-10 | Yang Ting-Ting | Facial image gender identification system and method thereof |
CN103914683A (en) * | 2013-12-31 | 2014-07-09 | 闻泰通讯股份有限公司 | Gender identification method and system based on face image |
CN104778481A (en) * | 2014-12-19 | 2015-07-15 | 五邑大学 | Method and device for creating sample library for large-scale face mode analysis |
CN104598888A (en) * | 2015-01-28 | 2015-05-06 | 广州远信网络科技发展有限公司 | Human face gender recognition method |
Non-Patent Citations (1)
Title |
---|
LI KUN-LUN,WANG MING-YAN: "Gender Classification Based on PCA and LBP", COMPUTER KNOWLEDGE AND TECHNOLOGY, vol. 5, no. 28, 5 October 2009 (2009-10-05), pages 8023 - 8025, XP055794293, ISSN: 1009-3044 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114418957A (en) * | 2021-12-24 | 2022-04-29 | 广州大学 | Global and local binary pattern image crack segmentation method based on robot vision |
Also Published As
Publication number | Publication date |
---|---|
CN110785769A (en) | 2020-02-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11908244B2 (en) | Human posture detection utilizing posture reference maps | |
US11830230B2 (en) | Living body detection method based on facial recognition, and electronic device and storage medium | |
CN111401257B (en) | Face recognition method based on cosine loss under non-constraint condition | |
US11151363B2 (en) | Expression recognition method, apparatus, electronic device, and storage medium | |
CN108334848B (en) | Tiny face recognition method based on generation countermeasure network | |
KR102641115B1 (en) | A method and apparatus of image processing for object detection | |
WO2020103700A1 (en) | Image recognition method based on micro facial expressions, apparatus and related device | |
WO2016149944A1 (en) | Face recognition method and system, and computer program product | |
WO2016138838A1 (en) | Method and device for recognizing lip-reading based on projection extreme learning machine | |
CN112381061B (en) | Facial expression recognition method and system | |
CN105335725A (en) | Gait identification identity authentication method based on feature fusion | |
CN108416291B (en) | Face detection and recognition method, device and system | |
CN110472625B (en) | Chinese chess piece visual identification method based on Fourier descriptor | |
CN104123543A (en) | Eyeball movement identification method based on face identification | |
WO2018100668A1 (en) | Image processing device, image processing method, and image processing program | |
CN112784712B (en) | Missing child early warning implementation method and device based on real-time monitoring | |
CN113011253B (en) | Facial expression recognition method, device, equipment and storage medium based on ResNeXt network | |
CN111860046A (en) | Facial expression recognition method for improving MobileNet model | |
CN108229432A (en) | Face calibration method and device | |
Mali et al. | Indian sign language recognition using SVM classifier | |
Devadethan et al. | Face detection and facial feature extraction based on a fusion of knowledge based method and morphological image processing | |
CN110598647A (en) | Head posture recognition method based on image recognition | |
CN111881732B (en) | SVM (support vector machine) -based face quality evaluation method | |
CN116523916B (en) | Product surface defect detection method and device, electronic equipment and storage medium | |
WO2021056531A1 (en) | Face gender recognition method, face gender classifier training method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19946793 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19946793 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.02.2023) |
|