CN110728193B - Method and device for detecting richness features of a face image

Info

Publication number
CN110728193B
Authority
CN
China
Prior art keywords
facial
image
sample image
face
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910872851.8A
Other languages
Chinese (zh)
Other versions
CN110728193A (en)
Inventors
Xu Wei (徐伟)
Luo Kun (罗琨)
Chen Xiaolei (陈晓磊)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lianshang Xinchang Network Technology Co Ltd
Original Assignee
Lianshang Xinchang Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lianshang Xinchang Network Technology Co Ltd filed Critical Lianshang Xinchang Network Technology Co Ltd
Priority to CN201910872851.8A
Publication of CN110728193A
Application granted
Publication of CN110728193B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The method comprises: performing face detection on at least one acquired face sample image to obtain a region to be detected for each image in the at least one face sample image; performing facial feature extraction and facial expression recognition, respectively, on the region to be detected of each image to obtain the facial feature information and facial expression information of each image; and performing feature fusion on the facial feature information and facial expression information of each image to obtain the richness feature of the at least one face sample image. In this way, the sample quality of the face images used for training can be evaluated during subsequent training of a face image replacement model, so that the trained face image replacement model can effectively fit the various facial poses, expressions, and so on of the face used for replacement, thereby improving the user's face-swapping experience.

Description

Method and device for detecting richness features of a face image
Technical Field
The present application relates to the technical field of image processing, and in particular to a method and device for detecting richness features of face images.
Background
Face replacement is an important research direction in the field of computer vision: because manual image editing and fusion with software such as Photoshop suffer from various shortcomings, automated face replacement has a significant impact on business, entertainment, and certain specialized industries. Existing deep face-replacement techniques (such as deep-learning face swapping) do not establish an effective sample quality evaluation system (for example, one based on sample richness) for a batch of face samples provided by a user or a third party. As a result, the reliability of the final synthesized samples is difficult to guarantee, the trained model often cannot effectively fit the various facial poses, expressions, and so on of the target face in the template samples, and the final effect is poor, which degrades the face-swapping experience.
Disclosure of Invention
An object of the present application is to provide a method and a device for detecting richness features of a face image, so as to address the lack of face richness features in the face recognition process in the prior art.
According to one aspect of the application, a method for detecting richness features of a face image is provided, wherein the method comprises the following steps:
acquiring at least one face sample image;
performing face detection on the at least one face sample image to obtain a region to be detected of each image in the at least one face sample image;
extracting facial features of a region to be detected of each image in the at least one facial sample image to obtain facial feature information of each image in the at least one facial sample image;
performing facial expression recognition on the to-be-detected region of each image in the at least one facial sample image to obtain facial expression information of each image in the at least one facial sample image;
and performing feature fusion on the facial feature information and facial expression information of each image in the at least one facial sample image to obtain the richness features of the at least one facial sample image.
Further, in the above method, the facial feature information includes facial feature point information and facial angle information, wherein
the extracting facial features of the to-be-detected region of each image in the at least one facial sample image to obtain the facial feature information of each image in the at least one facial sample image includes:
extracting facial feature points of a region to be detected of each image in the at least one facial sample image to obtain facial feature point information of each image in the at least one facial sample image;
carrying out face angle recognition on the face characteristic point information of each image in the at least one face sample image to obtain face angle information of each image in the at least one face sample image;
the feature fusion is performed on the facial feature information and facial expression information of each image in the at least one facial sample image to obtain the richness features of the at least one facial sample image, and the feature fusion includes:
and performing feature fusion on the facial feature point information, the facial angle information and the facial expression information of each image in the at least one facial sample image to obtain the richness features of the at least one facial sample image.
Further, in the above method, the extracting facial feature points of the region to be detected of each image in the at least one facial sample image to obtain facial feature point information of each image in the at least one facial sample image includes:
acquiring a key point positioning model for detecting facial features, wherein the key point positioning model is obtained by training a local binarization feature algorithm and a random forest algorithm;
and extracting facial feature points of the to-be-detected region of each image in the at least one facial sample image through the key point positioning model to obtain facial feature point information of each image in the at least one facial sample image.
Further, in the above method, the performing face angle recognition on the facial feature point information of each image in the at least one face sample image to obtain the face angle information of each image in the at least one face sample image includes:
acquiring a face angle recognition model for recognizing a face angle;
and carrying out face angle recognition on the face characteristic point information of each image in the at least one face sample image through the face angle recognition model to obtain the face angle information of each image in the at least one face sample image.
Further, in the above method, the performing facial expression recognition on the to-be-detected region of each image in the at least one facial sample image to obtain facial expression information of each image in the at least one facial sample image includes:
acquiring a facial expression recognition model for recognizing facial expressions, wherein the facial expression recognition model is obtained by training a convolutional neural network based on deep learning;
and carrying out facial expression recognition on the to-be-detected region of each image in the at least one facial sample image through the facial expression recognition model to obtain facial expression information of each image in the at least one facial sample image.
Further, in the above method, the performing feature fusion on the facial feature information and facial expression information of each image in the at least one facial sample image to obtain the richness feature of the at least one facial sample image includes:
fusing facial feature information and facial expression information of each image in the at least one facial sample image to obtain richness features of each image in the at least one facial sample image;
and obtaining the richness characteristics of the at least one face sample image according to the richness characteristics of each image in the at least one face sample image.
Further, in the above method, the performing feature fusion on the facial feature information and facial expression information of each image in the at least one facial sample image to obtain the richness feature of the at least one facial sample image includes:
obtaining facial feature information of the at least one facial sample image according to the facial feature information of each image in the at least one facial sample image;
obtaining facial expression information of the at least one face sample image according to facial expression information of each image in the at least one face sample image;
and fusing the facial feature information and facial expression information of the at least one facial sample image to obtain the richness feature of the at least one facial sample image.
Further, the above method also includes:
carrying out validity judgment on the corresponding face sample image based on the to-be-detected region,
and if the face sample image corresponding to the area to be detected is an effective face sample image, performing facial feature extraction and facial expression recognition on the area to be detected of the face sample image.
Further, in the above method, the determining the validity of the corresponding face sample image based on the to-be-detected region includes:
acquiring content information, pixel information and size information of the area to be detected;
and judging the effectiveness of the corresponding face sample image based on the content information, the pixel information and the size information of the region to be detected.
According to another aspect of the present application, there is also provided a device for detecting richness features of face images, wherein the device includes:
one or more processors; and
a non-volatile storage medium storing one or more computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to implement the method for detecting richness features of face images described above.
Compared with the prior art, the present application acquires at least one face sample image; performs face detection on the at least one face sample image to obtain a region to be detected for each image; performs facial feature extraction on the region to be detected of each image to obtain facial feature information for each image; performs facial expression recognition on the region to be detected of each image to obtain facial expression information for each image; and performs feature fusion on the facial feature information and facial expression information of each image to obtain the richness feature of the at least one face sample image. Detection of the richness feature of the at least one face sample image is thereby achieved, so that during subsequent training of a face image replacement model the sample quality of the sample face images used for training can be evaluated, enabling the trained model to effectively fit the various facial poses, expressions, and so on of the target face image used for replacement, thereby improving the user's face-swapping experience.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow diagram of a method for detecting richness features of a facial image in accordance with an aspect of the subject application;
fig. 2 is a schematic diagram illustrating an actual application scenario of a method for detecting richness features of a face image according to an aspect of the present application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM), in the form of a computer-readable medium. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
As shown in fig. 1, a method for detecting richness features of a face image according to an aspect of the present application includes step S11, step S12, step S13, step S14, and step S15, specifically:
step S11, acquiring at least one face sample image; here, the face sample image is used for a face image indicating a face sample of which feature information of the face image needs to be trained, and the face sample image includes one or more face sample images.
Step S12, performing face detection on the at least one face sample image to obtain the region to be detected of each image in the at least one face sample image; here, the region to be detected indicates the region of the image that contains a human face and needs to be detected.
Step S13, extracting facial features of the to-be-detected region of each image in the at least one facial sample image to obtain facial feature information of each image in the at least one facial sample image;
step S14, carrying out facial expression recognition on the to-be-detected region of each image in the at least one facial sample image to obtain facial expression information of each image in the at least one facial sample image;
Step S15, performing feature fusion on the facial feature information and the facial expression information of each image in the at least one face sample image to obtain the richness feature of the at least one face sample image.
Through the steps S11 to S15, the detection of the richness characteristics of at least one face sample image (i.e. a group of face sample images) is realized, so that the sample quality of the sample face image used for training the face image replacement model can be evaluated in the subsequent training process of the face image replacement model, and the trained face image replacement model can effectively fit various facial poses, expressions and the like of the target face image used for replacement, thereby improving the face-changing experience of the user.
For example, in order to extract and count the features of a group of face sample images in as many dimensions as possible, step S11 first obtains a group of face sample images: face sample image 1, face sample image 2, face sample image 3, …, face sample image N, where N is the number of selected face samples requiring feature extraction, that is, the number of face sample images. In step S12, face detection is performed on face sample image 1, face sample image 2, face sample image 3, …, face sample image N to obtain the regions to be detected (regions of interest, ROIs) of the N images: ROI1 of face sample image 1, ROI2 of face sample image 2, ROI3 of face sample image 3, …, ROI(N) of face sample image N. In step S13, facial feature extraction is performed on ROI1, ROI2, ROI3, …, ROI(N) to obtain the facial feature information of the N images: F1 of face sample image 1, F2 of face sample image 2, F3 of face sample image 3, …, F(N) of face sample image N. In step S14, facial expression recognition is performed on ROI1, ROI2, ROI3, …, ROI(N) to obtain the facial expression information of the N images: E1 of face sample image 1, E2 of face sample image 2, E3 of face sample image 3, …, E(N) of face sample image N. In step S15, feature fusion is performed on F1 and E1 of face sample image 1, F2 and E2 of face sample image 2, F3 and E3 of face sample image 3, …, and F(N) and E(N) of face sample image N to obtain the richness feature V of the N face sample images. Detection of the richness feature of a group of face sample images is thereby achieved: in the subsequent training of the face image replacement model, the sample quality of the sample face images used for training can be evaluated, so that the trained model can effectively fit the various facial poses, expressions, and so on of the target face image used for replacement, thereby improving the user's face-swapping experience.
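The flow of steps S11 through S15 can be summarized as a short pipeline. Below is a minimal Python sketch of that flow; the stage functions (detect_roi, extract_features, recognize_expression, fuse) are hypothetical placeholders for the face detection, feature extraction, expression recognition, and fusion components, since the patent does not tie the steps to any particular implementation:

```python
from typing import Any, Callable, List, Sequence

def detect_richness_feature(
    images: Sequence[Any],                       # step S11: N face sample images
    detect_roi: Callable[[Any], Any],            # hypothetical face detector
    extract_features: Callable[[Any], Any],      # hypothetical feature extractor
    recognize_expression: Callable[[Any], Any],  # hypothetical expression recognizer
    fuse: Callable[[List[Any], List[Any]], Any], # hypothetical fusion operator
) -> Any:
    """Steps S11-S15: detect ROIs, extract F and E per image, fuse into V."""
    rois = [detect_roi(img) for img in images]                 # step S12: ROI1..ROI(N)
    features = [extract_features(roi) for roi in rois]         # step S13: F1..F(N)
    expressions = [recognize_expression(roi) for roi in rois]  # step S14: E1..E(N)
    return fuse(features, expressions)                         # step S15: richness V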
Next to the above embodiment of the present application, the facial feature information includes facial feature point information and facial angle information, where the step S13 performs facial feature extraction on the to-be-detected region of each image in the at least one facial sample image to obtain the facial feature information of each image in the at least one facial sample image, and specifically includes:
extracting facial feature points of a region to be detected of each image in the at least one facial sample image to obtain facial feature point information of each image in the at least one facial sample image;
carrying out face angle recognition on the face characteristic point information of each image in the at least one face sample image to obtain face angle information of each image in the at least one face sample image;
in step S15, feature fusion is performed on the facial feature information and facial expression information of each image in the at least one facial sample image to obtain the richness feature of the at least one facial sample image, which specifically includes:
and performing feature fusion on the facial feature point information, the facial angle information and the facial expression information of each image in the at least one facial sample image to obtain the richness features of the at least one facial sample image.
For example, if the facial feature information F includes facial feature point information FP and facial angle information FA, then the facial feature extraction performed in step S13 on ROI1 of face sample image 1, ROI2 of face sample image 2, ROI3 of face sample image 3, …, ROI(N) of face sample image N comprises: extracting facial feature points from ROI1, ROI2, ROI3, …, ROI(N) to obtain facial feature point information FP1 of face sample image 1, FP2 of face sample image 2, FP3 of face sample image 3, …, FP(N) of face sample image N; and then performing face angle recognition on FP1, FP2, FP3, …, FP(N) to obtain face angle information FA1 of face sample image 1, FA2 of face sample image 2, FA3 of face sample image 3, …, FA(N) of face sample image N. When feature fusion is performed on each of the N face sample images in step S15, the facial feature point information FP1, face angle information FA1, and facial expression information E1 of face sample image 1, FP2, FA2, and E2 of face sample image 2, FP3, FA3, and E3 of face sample image 3, …, and FP(N), FA(N), and E(N) of face sample image N are fused to obtain the richness feature V of the N face sample images. The richness feature V obtained by fusing the facial feature point information, face angle information, and facial expression information of every image is richer, and provides an effective basis for evaluating the richness of the group of face samples, thereby realizing effective evaluation of the quality of the face sample images.
Next to the above embodiment of the present application, the extracting facial feature points of the to-be-detected region of each image in the at least one facial sample image in step S13 to obtain facial feature point information of each image in the at least one facial sample image specifically includes:
acquiring a key point positioning model for detecting facial features, wherein the key point positioning model is obtained by training a local binarization feature algorithm and a random forest algorithm;
and extracting facial feature points of the to-be-detected region of each image in the at least one facial sample image through the key point positioning model to obtain facial feature point information of each image in the at least one facial sample image.
It should be noted that the key point localization model may first extract key point features from the face images of a set of face samples used for training with a local binary features (LBF) algorithm, and then perform key point regression on the extracted key points with a random forest algorithm, thereby training a key point localization model for locating facial feature key points; key point features can subsequently be extracted, based on this model, from any face image whose facial features need to be extracted.
For example, when facial feature point extraction is performed on the region to be detected of each of the N face sample images in step S13, a pre-trained key point localization model (1) for detecting facial features is first obtained, and facial feature point extraction is then performed by the key point localization model (1) on ROI1 of face sample image 1, ROI2 of face sample image 2, ROI3 of face sample image 3, …, ROI(N) of face sample image N to obtain facial feature point information FP1 of face sample image 1, FP2 of face sample image 2, FP3 of face sample image 3, …, FP(N) of face sample image N, so that facial feature point extraction for the region to be detected of each of the N face sample images is carried out by the pre-trained key point localization model (1).
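As a concrete stand-in for such a key point localization model, OpenCV's contrib module ships an LBF-based facial landmark detector, which matches the local-binary-features-plus-random-forest-regression recipe described above. The sketch below is only an illustration under the assumption that opencv-contrib-python is installed and a pre-trained lbfmodel.yaml file is available; the patent itself does not name any specific library:

```python
import cv2
import numpy as np

facemark = cv2.face.createFacemarkLBF()
facemark.loadModel("lbfmodel.yaml")  # pre-trained LBF landmark model (assumed available)

def extract_feature_points(image: np.ndarray, roi: tuple) -> np.ndarray:
    """Return facial feature point information FP for one region to be detected."""
    faces = np.array([roi], dtype=np.int32)  # roi = (x, y, w, h) from face detection
    ok, landmarks = facemark.fit(image, faces)
    if not ok:
        raise ValueError("landmark fitting failed for the given ROI")
    return landmarks[0][0]                   # shape (68, 2): 68 (x, y) key points
```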
Next to the above embodiment of the present application, the performing facial angle recognition on the facial feature point information of each image in the at least one facial sample image in step S13 to obtain the facial angle information of each image in the at least one facial sample image specifically includes:
acquiring a face angle recognition model for recognizing face angles; here, the face angle recognition model is obtained by performing face angle recognition training on the face images of a group of face samples, so that face angle recognition can subsequently be performed, based on the pre-trained model, on any face image whose face angle needs to be determined.
And carrying out face angle recognition on the face feature point information of each image in the at least one face sample image through the face angle recognition model to obtain the face angle information of each image in the at least one face sample image.
For example, when face angle recognition is performed on the region to be detected of each of the N face sample images in step S13, a face angle recognition model (2) for recognizing face angles is obtained in advance, and face angle recognition is then performed by the face angle recognition model (2) on ROI1 of face sample image 1, ROI2 of face sample image 2, ROI3 of face sample image 3, …, ROI(N) of face sample image N to obtain face angle information FA1 of face sample image 1, FA2 of face sample image 2, FA3 of face sample image 3, …, FA(N) of face sample image N, so that angle recognition for the region to be detected of each of the N face sample images is carried out by the pre-trained face angle recognition model (2).
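The patent does not specify how the face angle recognition model works internally. One common way to obtain face angle information from facial feature points is head-pose estimation: matching a handful of landmarks against a generic 3D face model with cv2.solvePnP. The following is a minimal sketch under that assumption, not the patent's prescribed method:

```python
import cv2
import numpy as np

# Generic 3D reference points (nose tip, chin, eye corners, mouth corners), in mm.
MODEL_3D = np.array([
    (0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0),
], dtype=np.float64)

def face_angle(points_2d: np.ndarray, image_size: tuple) -> np.ndarray:
    """Estimate face angle information FA from 6 landmarks (same order as MODEL_3D)."""
    h, w = image_size
    camera = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(MODEL_3D, points_2d.astype(np.float64),
                               camera, None, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise ValueError("pose estimation failed")
    return rvec  # rotation vector; convertible to yaw/pitch/roll via cv2.Rodrigues
```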
Next to the above embodiment of the present application, the step S14 of performing facial expression recognition on the to-be-detected region of each image in the at least one facial sample image to obtain facial expression information of each image in the at least one facial sample image includes:
acquiring a facial expression recognition model for recognizing facial expressions, wherein the facial expression recognition model is obtained by convolutional neural network training based on deep learning;
and carrying out facial expression recognition on the to-be-detected region of each image in the at least one facial sample image through the facial expression recognition model to obtain facial expression information of each image in the at least one facial sample image.
It should be noted that the facial expression recognition model may adopt a deep-learning convolutional neural network and its classification model framework to perform facial expression recognition training on the face images of a group of face samples used for training, thereby obtaining a facial expression recognition model for recognizing and classifying facial expressions. In the process of training the facial expression recognition model, facial expressions are first divided into seven major classes: anger (Angry), disgust (Disgust), fear (Fear), happiness (Happy), sadness (Sad), surprise (Surprise), and neutral (Neutral). When the deep-learning convolutional neural network and its classification model framework are trained on the face images of the group of face samples, the facial expressions of the face images of different face samples can be learned, so that facial expressions can subsequently be recognized, based on this model, in any face image whose facial expression needs to be recognized.
For example, when facial expression recognition is performed on the region to be detected of each of the N face sample images in step S14, a pre-trained facial expression recognition model (3) for recognizing facial expressions is obtained, and facial expression recognition is then performed by the facial expression recognition model (3) on ROI1 of face sample image 1, ROI2 of face sample image 2, ROI3 of face sample image 3, …, ROI(N) of face sample image N to obtain facial expression information E1 of face sample image 1, E2 of face sample image 2, E3 of face sample image 3, …, E(N) of face sample image N, so that expression recognition for the region to be detected of each of the N face sample images is carried out by the pre-trained facial expression recognition model (3).
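As an illustration of the deep-learning expression classifier, the sketch below defines a small convolutional network over 48x48 grayscale face crops with the seven expression classes listed above, written in PyTorch. The patent does not specify an architecture; this FER-2013-style baseline is purely an assumption:

```python
import torch
import torch.nn as nn

EXPRESSIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

class ExpressionNet(nn.Module):
    """Small CNN mapping a 1x48x48 face ROI to 7 expression logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Linear(128 * 6 * 6, len(EXPRESSIONS))

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def recognize_expression(model: ExpressionNet, roi: torch.Tensor) -> str:
    """Return facial expression information E for one ROI tensor of shape (1, 48, 48)."""
    with torch.no_grad():
        logits = model(roi.unsqueeze(0))  # add batch dimension -> (1, 1, 48, 48)
    return EXPRESSIONS[int(logits.argmax(dim=1))]
```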
Next to the above embodiment of the present application, the step S15 performs feature fusion on the facial feature information and the facial expression information of each image in the at least one facial sample image to obtain the richness feature of the at least one facial sample image, and specifically includes:
fusing facial feature information and facial expression information of each image in the at least one facial sample image to obtain the richness feature of each image in the at least one facial sample image;
and obtaining the richness characteristic of the at least one face sample image according to the richness characteristic of each image in the at least one face sample image.
For example, in step S15, the facial feature information F1 and facial expression information E1 of face sample image 1, F2 and E2 of face sample image 2, F3 and E3 of face sample image 3, …, and F(N) and E(N) of face sample image N are first fused per image to obtain the richness feature V1 of face sample image 1, V2 of face sample image 2, V3 of face sample image 3, …, V(N) of face sample image N; the richness feature V of the N face sample images is then obtained from the per-image richness features V1, V2, V3, …, V(N), so that the final richness feature is obtained by per-image feature fusion followed by aggregation.
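A minimal sketch of this per-image-first fusion order follows, assuming each image's facial feature information F(i) and expression information E(i) are already flat numeric vectors; the patent leaves the fusion operator open, so concatenation and averaging are used here purely as placeholders:

```python
import numpy as np

def richness_per_image_first(F: list, E: list) -> np.ndarray:
    """Fuse F(i) and E(i) into V(i) per image, then aggregate V1..V(N) into V."""
    V = [np.concatenate([f, e]) for f, e in zip(F, E)]  # per-image richness V(i)
    return np.mean(V, axis=0)                           # group-level richness V
```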
Next to the foregoing embodiment of the present application, the step S15 of performing feature fusion on the facial feature information and facial expression information of each image in the at least one face sample image to obtain the richness feature of the at least one face sample image may alternatively include:
obtaining facial feature information of the at least one facial sample image according to the facial feature information of each image in the at least one facial sample image;
obtaining facial expression information of the at least one face sample image according to facial expression information of each image in the at least one face sample image;
and fusing the facial feature information and facial expression information of the at least one facial sample image to obtain the richness feature of the at least one facial sample image.
For example, in step S15, the facial feature information F1 of face sample image 1, F2 of face sample image 2, F3 of face sample image 3, …, F(N) of face sample image N is first counted and fused to obtain the combined facial feature information F(combined) of the N face sample images; the facial expression information E1 of face sample image 1, E2 of face sample image 2, E3 of face sample image 3, …, E(N) of face sample image N is likewise counted and fused to obtain the combined facial expression information E(combined) of the N face sample images; and feature fusion is then performed on F(combined) and E(combined) to obtain the richness feature V of the N face sample images, so that the richness feature of the N face sample images is obtained by fusing their combined facial feature information and combined facial expression information.
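The alternative order described here aggregates each modality across the whole batch before fusing. One natural reading of "richness" is diversity across the batch, so the sketch below aggregates by per-dimension spread and expression class frequencies; these concrete statistics are an assumption on my part, not something the patent fixes:

```python
import numpy as np

def richness_aggregate_first(F: list, E: list) -> np.ndarray:
    """Aggregate F1..F(N) and E1..E(N) first, then fuse into richness feature V."""
    F_combined = np.std(F, axis=0)  # spread of feature/angle values across samples
    E_combined = np.bincount(np.asarray(E), minlength=7) / len(E)  # class frequencies
    return np.concatenate([F_combined, E_combined])                # richness feature V
```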
Next to all the above embodiments of the present application, in the method for detecting richness features of a facial image provided in an embodiment of the present application, before performing facial feature extraction and facial expression recognition on a region to be detected of a face sample image, the method further includes:
carrying out validity judgment on the corresponding face sample image based on the to-be-detected region,
and if the face sample image corresponding to the area to be detected is an effective face sample image, performing facial feature extraction and facial expression recognition on the area to be detected of the face sample image.
For example, after face detection is performed on the N face sample images in step S12, ROI1 of face sample image 1, ROI2 of face sample image 2, ROI3 of face sample image 3, …, ROI(N) of face sample image N are obtained. Before facial feature extraction and facial expression recognition are performed on the region to be detected of each of the N face sample images, a validity judgment must be made on the corresponding face sample image based on its region to be detected: the validity of face sample image 1 is judged based on ROI1, that of face sample image 2 based on ROI2, that of face sample image 3 based on ROI3, …, and that of face sample image N based on ROI(N). For each of the N regions to be detected whose corresponding face sample image is judged to be a valid face sample image, facial feature extraction and facial expression recognition can then be performed in steps S13 and S14 on that region to be detected, which ensures that facial feature extraction and facial expression recognition are subsequently performed only on valid face sample images.
Next, in the above embodiment of the present application, the determining the validity of the corresponding face sample image based on the to-be-detected region includes:
acquiring content information, pixel information and size information of the area to be detected;
and judging the effectiveness of the corresponding face sample image based on the content information, the pixel information and the size information of the region to be detected.
For example, to facilitate the validity judgment along each dimension of the region to be detected of a face sample image, the content information, pixel information, and size information of each region to be detected (ROI1 of face sample image 1, ROI2 of face sample image 2, ROI3 of face sample image 3, …, ROI(N) of face sample image N) are first obtained. It is then judged whether the content information, pixel information, and size information of each region to be detected satisfy the image validity conditions: for example, whether the content information contains facial features, whether the pixel information meets a preset pixel threshold, and whether the size information meets a preset size threshold. This realizes the validity judgment of each of the N face sample images, prevents incomplete content, defective pixels, undersized regions, and the like from affecting subsequent facial feature extraction and facial expression recognition, and thereby guarantees the validity of the face sample images on which facial feature extraction and facial expression recognition are subsequently performed.
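A sketch of the three-part validity check follows, with the thresholds as assumed placeholders; the patent only requires that the content, pixel, and size information each satisfy a preset condition:

```python
import numpy as np

MIN_SIDE = 64          # assumed preset size threshold, in pixels
MIN_PIXELS = 64 * 64   # assumed preset pixel-count threshold

def is_valid_sample(roi_image: np.ndarray, contains_face: bool) -> bool:
    """Validity judgment on one region to be detected (content, pixels, size)."""
    h, w = roi_image.shape[:2]
    if not contains_face:       # content information: are facial features present?
        return False
    if h * w < MIN_PIXELS:      # pixel information: enough pixels to work with?
        return False
    return min(h, w) >= MIN_SIDE  # size information: large enough to process
```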
In an actual application scenario of the method for detecting richness features of a face image provided by the present application, as shown in fig. 2, when the richness feature of a group of face samples needs to be determined, the group of face samples and their face sample images are first obtained: face sample image 1, face sample image 2, face sample image 3, …, face sample image N. Face detection is then performed on the face sample images of the N face samples in a face detection module to obtain ROI1 of face sample image 1, ROI2 of face sample image 2, ROI3 of face sample image 3, …, ROI(N) of face sample image N. Next, based on the region to be detected, it is judged whether each face sample image is a face image corresponding to a valid face; face images corresponding to invalid faces are not processed further, while the face images corresponding to valid faces among the N face sample images undergo subsequent face processing. If all N face sample images correspond to valid faces, facial feature point extraction is performed on ROI1, ROI2, ROI3, …, ROI(N) to obtain facial feature point information FP1 of face sample image 1, FP2 of face sample image 2, FP3 of face sample image 3, …, FP(N) of face sample image N, and face angle recognition is performed based on the facial feature point information to obtain face angle information FA1 of face sample image 1, FA2 of face sample image 2, FA3 of face sample image 3, …, FA(N) of face sample image N. Of course, in order to take facial expressions into account, facial expression recognition must also be performed on ROI1, ROI2, ROI3, …, ROI(N) to obtain facial expression information E1 of face sample image 1, E2 of face sample image 2, E3 of face sample image 3, …, E(N) of face sample image N. The richness feature V of the N face sample images can then be obtained by fusing any combination of the facial feature point information, face angle information, and facial expression information. For example, feature fusion may be performed on the facial feature information F1 and facial expression information E1 of face sample image 1, F2 and E2 of face sample image 2, F3 and E3 of face sample image 3, …, and F(N) and E(N) of face sample image N to obtain the richness feature V of the N face sample images, thereby realizing detection of the richness feature of a group of face sample images. As another example, feature fusion may be performed on the facial feature point information FP1, face angle information FA1, and facial expression information E1 of face sample image 1, FP2, FA2, and E2 of face sample image 2, FP3, FA3, and E3 of face sample image 3, …, and FP(N), FA(N), and E(N) of face sample image N; the richness feature V obtained by fusing all three kinds of information for every image is richer and provides an effective basis for evaluating the richness of the group of face samples, thereby realizing effective evaluation of the quality of the face sample images.
In another embodiment, the present application further provides a device for detecting richness features of face images, wherein the device includes:
one or more processors; and
a non-volatile storage medium storing one or more computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to implement the method for detecting richness features of face images described above.
Here, the detailed contents of each embodiment of the device for detecting the richness feature of the face image may specifically refer to the corresponding part of the embodiment of the method for detecting the richness feature of the face image provided in the foregoing embodiment, and are not described herein again.
In summary, the present application acquires at least one face sample image; performs face detection on the at least one face sample image to obtain the region to be detected of each image; performs facial feature extraction on the region to be detected of each image to obtain the facial feature information of each image; performs facial expression recognition on the region to be detected of each image to obtain the facial expression information of each image; and performs feature fusion on the facial feature information and facial expression information of each image to obtain the richness feature of the at least one face sample image. Detection of the richness feature of the at least one face sample image is thereby achieved, so that in the subsequent training of the face image replacement model, the sample quality of the sample face images used for training can be evaluated, enabling the trained model to effectively fit the various facial poses, expressions, and so on of the target face image used for replacement and thereby improving the user's face-swapping experience.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. As such, the software programs (including associated data structures) of the present application can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Further, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Additionally, some portions of the present application may be applied as a computer program product, such as computer program instructions, which, when executed by a computer, may invoke or provide the method and/or solution according to the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not to denote any particular order.

Claims (10)

1. A method for detecting richness features of a face image, wherein the method comprises the following steps:
acquiring at least one face sample image;
performing face detection on the at least one face sample image to obtain a region to be detected of each image in the at least one face sample image;
extracting facial features of the to-be-detected region of each image in the at least one facial sample image to obtain facial feature information of each image in the at least one facial sample image;
performing facial expression recognition on the to-be-detected region of each image in the at least one facial sample image to obtain facial expression information of each image in the at least one facial sample image;
and performing feature fusion on the facial feature information and facial expression information of each image in the at least one facial sample image to obtain the richness feature of the at least one facial sample image, wherein the richness feature of the facial sample image is used for evaluating the sample quality of the sample facial image used for training the facial image replacement model, so that the trained facial image replacement model can be effectively attached to various facial poses and expressions of the target facial image used for replacement.
2. The method of claim 1, wherein,
the facial feature information includes facial feature point information and facial angle information, wherein,
the extracting facial features of the to-be-detected region of each image in the at least one facial sample image to obtain the facial feature information of each image in the at least one facial sample image includes:
extracting facial feature points of a region to be detected of each image in the at least one facial sample image to obtain facial feature point information of each image in the at least one facial sample image;
carrying out face angle recognition on the face characteristic point information of each image in the at least one face sample image to obtain face angle information of each image in the at least one face sample image;
the feature fusion is performed on the facial feature information and facial expression information of each image in the at least one facial sample image to obtain the richness features of the at least one facial sample image, and the feature fusion includes:
and performing feature fusion on the facial feature point information, the facial angle information and the facial expression information of each image in the at least one facial sample image to obtain the richness features of the at least one facial sample image.
3. The method of claim 2, wherein,
the extracting facial feature points of the to-be-detected region of each image in the at least one facial sample image to obtain facial feature point information of each image in the at least one facial sample image includes:
acquiring a key point positioning model for detecting facial features, wherein the key point positioning model is obtained by training a local binarization feature algorithm and a random forest algorithm;
and extracting facial feature points of the to-be-detected region of each image in the at least one facial sample image through the key point positioning model to obtain facial feature point information of each image in the at least one facial sample image.
4. The method of claim 3, wherein,
the obtaining the face angle information of each image in the at least one face sample image by performing face angle recognition on the face feature point information of each image in the at least one face sample image includes:
acquiring a face angle recognition model for recognizing a face angle;
and carrying out face angle recognition on the face feature point information of each image in the at least one face sample image through the face angle recognition model to obtain the face angle information of each image in the at least one face sample image.
5. The method of claim 1, wherein,
the obtaining of facial expression information of each image in the at least one face sample image by performing facial expression recognition on the to-be-detected region of each image in the at least one face sample image includes:
acquiring a facial expression recognition model for recognizing facial expressions, wherein the facial expression recognition model is obtained by convolutional neural network training based on deep learning;
and carrying out facial expression recognition on the to-be-detected region of each image in the at least one facial sample image through the facial expression recognition model to obtain facial expression information of each image in the at least one facial sample image.
6. The method of claim 1, wherein,
the feature fusion of the facial feature information and facial expression information of each image in the at least one facial sample image to obtain the richness features of the at least one facial sample image comprises:
fusing facial feature information and facial expression information of each image in the at least one facial sample image to obtain the richness feature of each image in the at least one facial sample image;
and obtaining the richness characteristic of the at least one face sample image according to the richness characteristic of each image in the at least one face sample image.
7. The method of claim 1, wherein,
the performing feature fusion on the facial feature information and the facial expression information of each image in the at least one facial sample image to obtain the richness features of the at least one facial sample image includes:
obtaining facial feature information of the at least one facial sample image according to the facial feature information of each image in the at least one facial sample image;
obtaining facial expression information of the at least one facial sample image according to the facial expression information of each image in the at least one facial sample image;
and fusing the facial feature information and the facial expression information of the at least one facial sample image to obtain the richness features of the at least one facial sample image.
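Claim 7 reverses the order of claim 6: each modality is first pooled across the image set, and the pooled vectors are then fused. Under the same assumptions (mean pooling, concatenation):

```python
import numpy as np

def richness_of_set_modality_first(feature_infos, expression_infos):
    """Pool each modality over all sample images, then fuse (claim 7 order)."""
    set_features = np.mean([f.ravel() for f in feature_infos], axis=0)
    set_expressions = np.mean([e.ravel() for e in expression_infos], axis=0)
    return np.concatenate([set_features, set_expressions])
```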
8. The method of any one of claims 1 to 7, wherein
the method further includes:
performing validity judgment on the corresponding facial sample image based on the region to be detected;
and if the facial sample image corresponding to the region to be detected is a valid facial sample image, performing facial feature extraction and facial expression recognition on the region to be detected of that facial sample image.
9. The method of claim 8, wherein,
the performing validity judgment on the corresponding facial sample image based on the region to be detected includes:
acquiring content information, pixel information, and size information of the region to be detected;
and judging the validity of the corresponding facial sample image based on the content information, the pixel information, and the size information of the region to be detected.
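Claims 8 and 9 gate the pipeline on a validity judgment built from content, pixel, and size information. The patent publishes no thresholds, so everything numeric below is an assumption; sharpness via Laplacian variance stands in for "pixel information" and a face re-detection for "content information":

```python
import cv2

def is_valid_face_region(region, min_side=64, blur_threshold=100.0):
    """Illustrative validity judgment for a region to be detected."""
    h, w = region.shape[:2]
    if min(h, w) < min_side:                                     # size information
        return False
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_threshold:   # pixel information
        return False
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return len(detector.detectMultiScale(gray)) > 0              # content information
```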
10. A device for detecting richness features of a face image, wherein
the device comprises:
one or more processors; and
a non-volatile storage medium storing one or more computer-readable instructions which,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 9.
CN201910872851.8A 2019-09-16 2019-09-16 Method and device for detecting richness characteristics of face image Active CN110728193B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910872851.8A CN110728193B (en) 2019-09-16 2019-09-16 Method and device for detecting richness characteristics of face image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910872851.8A CN110728193B (en) 2019-09-16 2019-09-16 Method and device for detecting richness characteristics of face image

Publications (2)

Publication Number Publication Date
CN110728193A (en) 2020-01-24
CN110728193B (en) 2022-10-04

Family

ID=69219083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910872851.8A Active CN110728193B (en) 2019-09-16 2019-09-16 Method and device for detecting richness characteristics of face image

Country Status (1)

Country Link
CN (1) CN110728193B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507241A (en) * 2020-04-14 2020-08-07 四川聚阳科技集团有限公司 Lightweight network classroom expression monitoring method
CN111553267B (en) * 2020-04-27 2023-12-01 腾讯科技(深圳)有限公司 Image processing method, image processing model training method and device
CN112749657A (en) * 2021-01-07 2021-05-04 北京码牛科技有限公司 House renting management method and system

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN105184249B (en) * 2015-08-28 2017-07-18 百度在线网络技术(北京)有限公司 Method and apparatus for face image processing
CN107316020B (en) * 2017-06-26 2020-05-08 司马大大(北京)智能系统有限公司 Face replacement method and device and electronic equipment
CN110096925B (en) * 2018-01-30 2021-05-14 普天信息技术有限公司 Enhancement method, acquisition method and device of facial expression image
CN108363973B (en) * 2018-02-07 2022-03-25 电子科技大学 Unconstrained 3D expression migration method
CN108491835B (en) * 2018-06-12 2021-11-30 常州大学 Two-channel convolutional neural network for facial expression recognition
CN109614910B (en) * 2018-12-04 2020-11-20 青岛小鸟看看科技有限公司 Face recognition method and device

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN107180234A (en) * 2017-06-01 2017-09-19 四川新网银行股份有限公司 The credit risk forecast method extracted based on expression recognition and face characteristic

Non-Patent Citations (1)

Title
Research on Face Pose Estimation and Recognition System Based on Feature Points; Duan Peicong; China Master's Theses Full-text Database, Information Science and Technology Series; 2018-04-15; Sections 2.3.2 and 3.2.3 of the main text *

Also Published As

Publication number Publication date
CN110728193A (en) 2020-01-24

Similar Documents

Publication Publication Date Title
CN110827247B (en) Label identification method and device
CN107239666B (en) Method and system for desensitizing medical image data
CN110689037B (en) Method and system for automatic object annotation using deep networks
CN110728193B (en) Method and device for detecting richness characteristics of face image
CN110175609B (en) Interface element detection method, device and equipment
CN110348392B (en) Vehicle matching method and device
CN111291661B (en) Method and equipment for identifying text content of icon in screen
CN112085022B (en) Method, system and equipment for recognizing characters
US20120057745A9 (en) Detection of objects using range information
CN113591746B (en) Document table structure detection method and device
CN112487848A (en) Character recognition method and terminal equipment
CN110827246A (en) Electronic equipment frame appearance flaw detection method and equipment
CN113205047A (en) Drug name identification method and device, computer equipment and storage medium
CN111507332A (en) Vehicle VIN code detection method and equipment
CN111652144A (en) Topic segmentation method, device, equipment and medium based on target region fusion
CN112434555A (en) Key value pair region identification method and device, storage medium and electronic equipment
CN115953744A (en) Vehicle identification tracking method based on deep learning
CN111626244B (en) Image recognition method, device, electronic equipment and medium
CN112907206A (en) Service auditing method, device and equipment based on video object identification
CN117351505A (en) Information code identification method, device, equipment and storage medium
CN111860122A (en) Method and system for recognizing reading comprehensive behaviors in real scene
CN104850819B (en) Information processing method and electronic equipment
CN112348112B (en) Training method and training device for image recognition model and terminal equipment
CN115546811A (en) Method, device and equipment for identifying seal and storage medium
CN112528079A (en) System detection method, apparatus, electronic device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant