CN111695431A - Face recognition method, face recognition device, terminal equipment and storage medium - Google Patents

Face recognition method, face recognition device, terminal equipment and storage medium

Info

Publication number
CN111695431A
CN111695431A
Authority
CN
China
Prior art keywords
face
target
mask
key points
overlay
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010424132.2A
Other languages
Chinese (zh)
Inventor
杨泽霖
杨坚
涂前彦
刘伟生
薛利荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Heils Zhongcheng Technology Co ltd
Original Assignee
Shenzhen Heils Zhongcheng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Heils Zhongcheng Technology Co ltd filed Critical Shenzhen Heils Zhongcheng Technology Co ltd
Priority to CN202010424132.2A priority Critical patent/CN111695431A/en
Publication of CN111695431A publication Critical patent/CN111695431A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a face recognition method, a face recognition device, terminal equipment and a storage medium. The face recognition method comprises the following steps: acquiring a face image to be overlaid; calculating target position information of face key points and face target pose angle information in the face image; determining a target mask overlay corresponding to the face target pose angle information according to the face target pose angle information and a pre-established mask overlay library; aligning the key points of the target mask overlay with the face key points according to the position information of the key points of the target mask overlay and the target position information of the face key points; pasting the target mask overlay onto the face image to obtain a target face image; and performing face recognition on the target face image. By transferring mask samples onto a database of unmasked face images, a large amount of realistic masked-face training data can be generated, and a model trained on this data achieves better face recognition performance.

Description

Face recognition method, face recognition device, terminal equipment and storage medium
Technical Field
The invention relates to the technical field of computer image recognition, in particular to a face recognition method, a face recognition device, terminal equipment and a storage medium.
Background
In environments prone to contamination or viral transmission, people must wear masks to prevent the spread of pollutants or viruses, which can hinder face recognition.
In the prior art, the samples used for face recognition are face images without any occlusion; when a face image with a mask is presented, the original face recognition algorithm achieves low accuracy.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a face recognition method, apparatus, terminal device and storage medium that overcome or at least partially solve the above problems.
In a first aspect, an embodiment of the present invention provides a face recognition method, including:
acquiring a face image to be overlaid;
calculating target position information of face key points and face target pose angle information in the face image;
determining a target mask overlay corresponding to the face target pose angle information according to the face target pose angle information and a pre-established mask overlay library;
aligning the key points of the target mask overlay with the face key points according to the position information of the key points of the target mask overlay and the target position information of the face key points;
pasting the target mask overlay onto the face image to be overlaid to obtain a target face image;
and performing face recognition on the target face image.
Optionally, the calculating target position information of face key points and face target pose angle information in the face image includes:
processing the face image through a preset detection algorithm to obtain the target position information of the face key points and the face target pose angle information, wherein the face target pose angle information comprises the magnitude and direction of a pose angle.
Optionally, the pre-established mask overlay library is established as follows:
collecting face images of subjects wearing different types of masks, wherein the face images are shot at different angles and under different illumination intensities;
acquiring mask contour information and key point information from the face images with the different types of masks;
cutting the masks out of the face images according to the mask contour information to obtain the mask overlay library, wherein the overlays in the library are in RGBA format.
Optionally, the acquiring a face image to be overlaid includes:
acquiring a plurality of face images, wherein the face images do not contain a mask;
and aligning the plurality of face images to obtain the face images to be overlaid.
Optionally, the determining a target mask overlay corresponding to the face target pose angle information according to the face target pose angle information and a pre-established mask overlay library includes:
searching the pre-established mask overlay library for the mask overlay whose pose angle has the smallest Euclidean distance to the face target pose angle information.
Optionally, the aligning the key points of the target mask overlay with the face key points according to the position information of the key points of the target mask overlay and the target position information of the face key points comprises:
performing a similarity transformation on the position information of the key points of the target mask overlay according to the target position information of the face key points;
acquiring the anchor point required for pasting;
and aligning the key points of the target mask overlay with the face key points according to the anchor point.
In a second aspect, an embodiment of the present invention provides a face recognition apparatus, including:
an acquisition module, configured to acquire a face image to be overlaid;
a calculation module, configured to calculate target position information of face key points and face target pose angle information in the face image;
a determining module, configured to determine a target mask overlay corresponding to the face target pose angle information according to the face target pose angle information and a pre-established mask overlay library;
an alignment module, configured to align the key points of the target mask overlay with the face key points according to the position information of the key points of the target mask overlay and the target position information of the face key points;
an overlay module, configured to paste the target mask overlay onto the face image to be overlaid to obtain a target face image;
and a recognition module, configured to perform face recognition on the target face image.
Optionally, the calculation module is specifically configured to:
process the face image through a preset detection algorithm to obtain the target position information of the face key points and the face target pose angle information, wherein the face target pose angle information comprises the magnitude and direction of a pose angle.
Optionally, the apparatus further comprises a model building module, configured to:
collect face images of subjects wearing different types of masks, wherein the face images are shot at different angles and under different illumination intensities;
acquire mask contour information and key point information from the face images with the different types of masks;
and cut the masks out of the face images according to the mask contour information to obtain the mask overlay library, wherein the overlays are in RGBA format.
Optionally, the acquisition module is specifically configured to:
acquire a plurality of face images, wherein the face images do not contain a mask;
and align the plurality of face images to obtain the face images to be overlaid.
Optionally, the determining module is configured to:
search the pre-established mask overlay library for the mask overlay whose pose angle has the smallest Euclidean distance to the face target pose angle information.
Optionally, the alignment module is specifically configured to:
perform a similarity transformation on the position information of the key points of the target mask overlay according to the target position information of the face key points;
acquire the anchor point required for pasting;
and align the key points of the target mask overlay with the face key points according to the anchor point.
In a third aspect, an embodiment of the present invention provides a terminal device, including: at least one processor and memory;
the memory stores a computer program; the at least one processor executes the computer program stored in the memory to implement the face recognition method provided in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed, implements the face recognition method provided in the first aspect.
The embodiment of the invention has the following advantages:
the embodiment of the invention provides a face recognition method, a face recognition device, a terminal device and a storage medium, wherein the face recognition method comprises the following steps: acquiring a face image to be pasted; calculating target position information and human face target attitude angle information of human face key points in the human face image; determining a target mask chartlet corresponding to the human face target attitude angle information according to the human face target attitude angle information and a pre-established mask chartlet library; aligning the key points of the target mask chartlet with the key points of the human face according to the position information of the key points of the target mask chartlet and the target position information of the key points of the human face; pasting a target mask picture on a face image to be pasted to obtain a target face image; and carrying out face recognition on the target face image. By transferring the mask sample to the database of the face image without wearing the mask, more qualitative data can be obtained, and after the data is trained, the face recognition performance can be improved.
Drawings
FIG. 1 is a flow chart of steps of an embodiment of a face recognition method of the present invention;
FIG. 2 is a schematic diagram of face image sample acquisition in accordance with the present invention;
FIG. 3 is a schematic view of the mask of the present invention showing key points and outlines;
fig. 4 is a schematic view of a mask library of the present invention;
FIG. 5 is a schematic view of the mask of the present invention aligned with a human face for charting;
FIG. 6 is a flow chart of steps of yet another embodiment of a face recognition method of the present invention;
FIG. 7 is a block diagram of an embodiment of a face recognition apparatus according to the present invention;
fig. 8 is a schematic structural diagram of a terminal device of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a face recognition method according to the present invention is shown, which may specifically include the following steps:
s101, obtaining a face image to be pasted;
specifically, the face image set may be not only acquired by itself, but also crawled from the network, or may be an open-source published data set, such as LFW, MS-Celeb-1M, and the like.
The self collection can be to collect the image of the face without wearing the mask through the camera and upload the image to the server through the camera, so that the server acquires a plurality of face images, wherein the face images do not comprise the mask, and the plurality of face images are aligned to obtain the face images to be pasted.
S102, calculating target position information of face key points and face target pose angle information in the face image;
Specifically, the server obtains the face image and determines facial features from its pixels, skin color and so on. The facial features may be feature points of the face shown in the image, and a feature vector describing the face is derived from the relative distances between the key points, for example from the positions and widths of the eyes, nose, mouth and eyebrows, and the thickness and shape of the eyebrows. The facial features may be determined from the RGB pixels of the face image.
Target position information of the face key points and face target pose angle information in the face image to be overlaid are obtained by a preset algorithm; the face target pose angle information comprises the magnitude and direction of the pose angle. Specifically, the face pose includes, but is not limited to, frontal, yaw and pitch; yaw is divided into left and right yaw, and pitch into upward and downward pitch.
The face pose is characterized by three rotation angles: yaw, pitch and roll.
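The yaw/pitch/roll representation above can be sketched by decomposing a head rotation matrix into Euler angles. This is an illustrative helper under an assumed ZYX convention (the patent does not fix one), in pure NumPy:

```python
import math
import numpy as np

def rotation_to_pose(R):
    """Decompose a 3x3 rotation matrix into (yaw, pitch, roll) in degrees.

    Assumed ZYX Euler convention (not specified by the patent): pitch about
    the x axis, yaw about the y axis, roll about the viewing (z) axis.
    """
    R = np.asarray(R, dtype=float)
    sy = math.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2)
    if sy > 1e-6:
        pitch = math.degrees(math.atan2(R[2, 1], R[2, 2]))
        yaw = math.degrees(math.atan2(-R[2, 0], sy))
        roll = math.degrees(math.atan2(R[1, 0], R[0, 0]))
    else:  # gimbal lock: yaw is +/-90 degrees, pitch and roll are coupled
        pitch = math.degrees(math.atan2(-R[1, 2], R[1, 1]))
        yaw = math.degrees(math.atan2(-R[2, 0], sy))
        roll = 0.0
    return yaw, pitch, roll
```

In practice the rotation matrix itself would come from a pose estimator such as a PnP solve on the detected key points; the decomposition shown here only illustrates how the three angles characterize the pose.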
S103, determining a target mask overlay corresponding to the face target pose angle information according to the face target pose angle information and a pre-established mask overlay library;
Specifically, a mask overlay library containing different masks is pre-established on the server. After the server obtains the face target pose angle information of the face image, it searches this pre-established library for the target mask overlay corresponding to that pose angle information.
S104, aligning the key points of the target mask overlay with the face key points according to the position information of the key points of the target mask overlay and the target position information of the face key points;
S105, pasting the target mask overlay onto the face image to be overlaid to obtain a target face image;
and S106, performing face recognition on the target face image.
The embodiment of the invention provides a face recognition method, which comprises: acquiring a face image to be overlaid; calculating target position information of face key points and face target pose angle information in the face image; determining a target mask overlay corresponding to the face target pose angle information according to the face target pose angle information and a pre-established mask overlay library; aligning the key points of the target mask overlay with the face key points according to the position information of the key points of the target mask overlay and the target position information of the face key points; pasting the target mask overlay onto the face image to obtain a target face image; and performing face recognition on the target face image. By transferring mask samples onto a database of unmasked face images, a large amount of realistic masked-face training data can be generated, and a model trained on this data achieves better face recognition performance.
The method provided by the above embodiment is further described in an additional embodiment of the present invention.
For clarity, the process of establishing the pre-established mask overlay library is explained in detail in the following embodiment, comprising:
Step 01, collecting face images of subjects wearing different types of masks, wherein the face images are shot at different angles and under different illumination intensities;
specifically, as shown in fig. 2, the schematic diagram of the face image sample collection of the present invention is that, in order to make the mask fit the face more tightly, the angles of the mask and the face must be correspondingly constrained. The best way is to set up a test bed to collect the mask data at a fixed angle.
The camera can be erected from different angles to shoot, two types of pictures can be shot at the same angle, one type of the pictures does not wear the mask, and the other type of the pictures wears masks with different colors and styles. The main purpose of taking the two types of photos is that the real angle and the angle detected by the human face gesture detection later are possibly deviated, and the gesture data obtained by the same gesture estimation algorithm are similar to homologous data, so that the actual mapping operation can be more matched.
In addition, take the gauze mask after, the people's face is sheltered from, and the gesture angle detection of people's face can receive some influence in fact, if with that not take the meeting of gauze mask to let the gesture angle draw more accurate.
The laboratory bench just can carry out the collection work of beginning after finishing building, prepares the gauze mask of 10 different colours and style, finds 5 volunteers, to every angle of camera, and 1 data of not wearing the gauze mask are gathered to every volunteer to and 10 data of wearing different gauze masks, because this process is that the camera shoots at the change position, consequently the volunteer does not need the shift position, and this data angle that also can let the experiment gather is more true. The angle of each movement of the camera can be controlled, the smaller the angle of movement. The higher the accuracy of the alignment, the more-45, -30, -15, 0, 15, 30, 45 is selected. In order to guarantee the complexity of data, also control illumination, through the lamp area on the wall, the power of control light has set up 5 light power grades, and the simulation daily environment and then guarantee the variety of data. Finally, 1925 final images were acquired.
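The reported total is consistent with the collection setup; a quick arithmetic check using only the numbers stated above:

```python
# Numbers stated in the text above
camera_angles = [-45, -30, -15, 0, 15, 30, 45]  # 7 fixed camera positions
volunteers = 5
photos_per_volunteer_per_angle = 1 + 10  # 1 without a mask + 10 different masks
light_levels = 5

total_images = (len(camera_angles) * volunteers
                * photos_per_volunteer_per_angle * light_levels)
print(total_images)  # 1925, matching the total reported in the text
```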
Step 02, acquiring mask contour information and key point information from the face images with the different types of masks;
Specifically, fig. 3 is a schematic diagram of mask key point and contour labeling according to the present invention. After the data is collected it must be annotated with three types of information: the face pose angle information, the position information of 5 face key points, and the mask contour. The pose angle information is used to match the face with the closest angle, the key points are used for accurate overlay placement, and the mask contour is used to cut out the mask region.
A mask overlay library is then obtained from the mask contour information, as shown in fig. 4, a schematic view of the mask overlay library of the present invention.
Specifically, the calculating target position information of face key points and face target pose angle information in the face image includes:
processing the face image through a preset detection algorithm to obtain the target position information of the face key points and the face target pose angle information, wherein the face target pose angle information comprises the magnitude and direction of a pose angle.
Specifically, the preset detection algorithm is either the ASM algorithm or an algorithm that regresses the pose and the key points simultaneously. With the ASM algorithm, the face image is processed to obtain the target position information of the face key points, and the pose angle information is then derived from the key points via solvePnP; to obtain higher-precision face target pose angle information, the algorithm that regresses pose and key points simultaneously can be used instead.
Specifically, ASM (Active Shape Model) is a key point detection method based on a Point Distribution Model (PDM). In a PDM, the geometry of objects with similar shapes, such as faces, hands, hearts or lungs, is represented by serially concatenating the coordinates of several key feature points (landmarks) into a shape vector.
Like most statistical learning methods, ASM has a training part and a test part, namely shape model building and shape matching (fitting). The algorithm is simple in nature and can be used for real-time detection.
The method comprises the following concrete steps:
manually calibrating a training set -> aligning and constructing a shape model -> searching and matching;
in order to create an ASM, a set of N face images (including different expressions and poses of multiple persons) labeled with N feature points is required as training data. The feature points can be marked on the outer contour of the face and the edge of the organ, and it should be noted that the sequence of the respective marked points needs to be consistent among the respective photographs in the training set.
Each labeled image yields a feature point set that can be regarded as a 2n-dimensional vector, where n is the number of feature points:
{(x1, y1), (x2, y2), ..., (xn, yn)}, written as x = (x1, ..., xn, y1, ..., yn)^T
training the model: the images need to be aligned first:
to study the shape variations of the training images, the images should be aligned first by comparing corresponding points in different shapes.
Alignment refers to the process of rotating, scaling, and translating other shapes as close as possible to a reference shape, with the reference shape.
First, a reference image is selected. Other pictures in the training set are transformed to be as close to the reference image as possible, and the specific change process can be represented by a scaling amplitude parameter s, a rotation parameter and a translation parameter matrix t.
The proximity is measured mathematically, often by the magnitude of the euclidean distance, as close as possible to the reference shape.
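The alignment just described (scale, rotation, translation minimizing Euclidean distance to the reference) is a 2D Procrustes fit. A minimal NumPy sketch, assuming a least-squares criterion and points stored as rows:

```python
import numpy as np

def align_similarity(src, ref):
    """Least-squares similarity transform (s, R, t) with ref ~= s * src @ R.T + t.

    2D Procrustes alignment: the rotation comes from the SVD of the
    cross-covariance matrix, with a determinant check to exclude reflections.
    """
    src = np.asarray(src, float)
    ref = np.asarray(ref, float)
    mu_s, mu_r = src.mean(axis=0), ref.mean(axis=0)
    S, Rf = src - mu_s, ref - mu_r
    U, D, Vt = np.linalg.svd(Rf.T @ S)      # SVD of the cross-covariance
    sign = np.sign(np.linalg.det(U @ Vt))   # +1, or -1 if a reflection sneaks in
    d = np.ones(len(D))
    d[-1] = sign
    R = U @ np.diag(d) @ Vt
    s = (D * d).sum() / (S ** 2).sum()      # optimal isotropic scale
    t = mu_r - s * R @ mu_s
    return s, R, t
```

The same fit is reused later in the pipeline, where the selected mask overlay's key points are similarity-transformed onto the detected face key points.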
Secondly, constructing local features;
specifically, PCA processing is performed on the aligned shape features. Next, local features are constructed for each keypoint. The goal is that each keypoint can find a new location during each iterative search. Local features are typically characterized by gradients to prevent illumination variations. Some methods extract along the normal direction of the edge, and some methods extract in a rectangular region near the key point.
Thirdly, memorability shape search, first: calculating the positions of eyes (or eyes and mouth), making simple scale and rotation changes, and aligning the face; then, searching near each aligned point, and matching each local key point (usually adopting the Mahalanobis distance) to obtain a preliminary shape; then correcting the matching result by using the average human face (shape model); iterate until convergence.
The algorithm is adopted to carry out key definite calculation, and ordered characteristic points can be obtained; the adjustment of the parameters can be limited based on training data to limit the change in shape to a reasonable range.
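The Mahalanobis matching step mentioned above can be illustrated with the generic formula (this is the standard definition, not the patent's specific implementation):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of x from a distribution with the given mean and
    covariance; reduces to the Euclidean distance when cov is the identity."""
    d = np.asarray(x, float) - np.asarray(mean, float)
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```

In the ASM search, `mean` and `cov` would come from the local-feature statistics learned for each key point during training.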
Step 03, cutting the masks out of the face images with the different types of masks according to the mask contour information, to obtain the mask overlay library, wherein the overlays are in RGBA format.
Specifically, RGBA is a color representation carrying Red, Green, Blue and Alpha (transparency/opacity) channels. Although sometimes described as a color space, it is in fact just the RGB model with an additional alpha channel.
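Compositing such an RGBA overlay onto an RGB face image can be sketched as follows (illustrative; the function name and array layout are assumptions):

```python
import numpy as np

def alpha_paste(face_rgb, overlay_rgba, top, left):
    """Composite an RGBA overlay onto an RGB image at (top, left).

    The alpha channel decides, per pixel, how much of the overlay replaces
    the underlying face pixel; alpha 0 leaves the face pixel untouched.
    """
    out = face_rgb.astype(float)          # astype copies, so face_rgb is untouched
    h, w = overlay_rgba.shape[:2]
    region = out[top:top + h, left:left + w]
    alpha = overlay_rgba[..., 3:4].astype(float) / 255.0
    region[:] = alpha * overlay_rgba[..., :3] + (1.0 - alpha) * region
    return out.astype(np.uint8)
```

This is why the library is stored as RGBA: the alpha channel produced by the contour-based cut-out lets only the mask region override the face pixels.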
Optionally, step S102 includes:
processing the face image through a preset detection algorithm to obtain the target position information of the face key points and the face target pose angle information, wherein the face target pose angle information comprises the magnitude and direction of a pose angle.
Optionally, step S101 specifically includes:
acquiring a plurality of face images, wherein the face images do not contain a mask;
and aligning the plurality of face images to obtain the face images to be overlaid.
Optionally, step S103 includes:
searching the pre-established mask overlay library for the mask overlay whose pose angle has the smallest Euclidean distance to the face target pose angle information.
Optionally, step S104 includes:
performing a similarity transformation on the position information of the key points of the target mask overlay according to the target position information of the face key points;
acquiring the anchor point required for pasting;
and aligning the key points of the target mask overlay with the face key points according to the anchor point.
Specifically, according to the face key point information, the selected RGBA mask data is similarity-transformed relative to the face. After the transformation, the overlay is pasted using the nose as the anchor point; the effect is shown in fig. 5. The anchor mainly limits the paste range so that the eyes are occluded as little as possible; otherwise the experiment would be strongly affected.
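The nose-anchored paste reduces to computing where the overlay's top-left corner must land. A tiny sketch (hypothetical helper, assuming the similarity transform has already been applied to the overlay and its key points):

```python
def paste_position(face_nose_xy, overlay_nose_xy):
    """Top-left paste coordinates so that the overlay's nose key point lands
    exactly on the face's detected nose key point."""
    fx, fy = face_nose_xy
    ox, oy = overlay_nose_xy
    return (int(round(fx - ox)), int(round(fy - oy)))
```

The returned coordinates would feed directly into an alpha-compositing paste of the RGBA overlay.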
Fig. 6 is a flowchart of steps of another embodiment of a face recognition method according to the present invention, as shown in fig. 6, specifically:
s601, building a collection test environment;
s602, collecting pictures wearing various masks at different angles;
s603, labeling mask data (including mask key points and contours);
s604, generating an RGBA format map library according to the labeling outline;
s605, aligning the human face;
s606, detecting key points of the human face;
s607, estimating the human face posture;
S608, randomly selecting, from the mask library, an image among those closest to the face pose;
S609, comparing the face key points with the mask key points;
S610, attaching the mask overlay to the aligned face;
the embodiment of the invention provides a face recognition method, which comprises the steps of obtaining a face image to be pasted; calculating target position information and human face target attitude angle information of human face key points in the human face image; determining a target mask chartlet corresponding to the human face target attitude angle information according to the human face target attitude angle information and a pre-established mask chartlet library; aligning the key points of the target mask chartlet with the key points of the human face according to the position information of the key points of the target mask chartlet and the target position information of the key points of the human face; pasting a target mask picture on a face image to be pasted to obtain a target face image; and carrying out face recognition on the target face image. By transferring the mask sample to the database of the face image without wearing the mask, more qualitative data can be obtained, and after the data is trained, the face recognition performance can be improved.
Another embodiment of the present invention provides a face recognition apparatus, configured to execute the face recognition method provided in the foregoing embodiment.
Referring to fig. 7, a block diagram of a structure of an embodiment of a face recognition apparatus of the present invention is shown, where the apparatus may be applied in a video network, and specifically may include the following modules: an obtaining module 701, a sending module 702, an inquiring module 703 and a processing module 704, wherein:
the obtaining module 701 is configured to obtain a face image to be pasted;
the calculation module 702 is configured to calculate target position information of face key points in the face image and target pose angle information of the face;
the determining module 703 is configured to determine, according to the target pose angle information and a pre-established mask sticker library, a target mask sticker corresponding to the target pose angle information;
the alignment module 704 is configured to align the key points of the target mask sticker with the face key points according to the position information of the key points of the target mask sticker and the target position information of the face key points;
the sticker module 705 is configured to paste the target mask sticker onto the face image to be pasted to obtain a target face image;
the recognition module 706 is configured to perform face recognition on the target face image.
The embodiment of the invention provides a face recognition apparatus for: obtaining a face image to be pasted; calculating target position information of face key points in the face image and target pose angle information of the face; determining, according to the target pose angle information and a pre-established mask sticker library, a target mask sticker corresponding to the target pose angle information; aligning the key points of the target mask sticker with the face key points according to the position information of the key points of the target mask sticker and the target position information of the face key points; pasting the target mask sticker onto the face image to be pasted to obtain a target face image; and performing face recognition on the target face image. By transferring mask samples onto a database of unmasked face images, a large amount of high-quality training data can be obtained, and training on this data improves face recognition performance.
The following provides further details of the face recognition apparatus provided in the above embodiment.
Optionally, the calculation module is specifically configured to:
process the face image through a preset detection algorithm to obtain the target position information of the face key points and the target pose angle information of the face, where the target pose angle information includes the magnitude and direction of the pose angle.
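As a rough illustration of how pose angle information can be derived from key-point positions, the sketch below estimates the roll and yaw angles from three key points. This is a toy heuristic under assumed key-point names, not the patent's preset detection algorithm:

```python
import numpy as np

def estimate_pose_angles(keypoints):
    """Rough pose estimate from three face key points.

    keypoints: dict with 'left_eye', 'right_eye' and 'nose'
    mapping to (x, y) pixel coordinates (names are assumptions
    for illustration). Returns (roll, yaw) in degrees.
    """
    le = np.asarray(keypoints["left_eye"], dtype=float)
    re = np.asarray(keypoints["right_eye"], dtype=float)
    nose = np.asarray(keypoints["nose"], dtype=float)

    # Roll: in-plane rotation, i.e. the angle of the line joining the eyes.
    dx, dy = re - le
    roll = np.degrees(np.arctan2(dy, dx))

    # Yaw: horizontal offset of the nose from the eye midpoint,
    # normalised by the inter-eye distance (0 for a frontal face).
    eye_mid = (le + re) / 2.0
    eye_dist = np.linalg.norm(re - le)
    yaw = np.degrees(np.arctan2(nose[0] - eye_mid[0], eye_dist))
    return roll, yaw
```

In practice a full detector would also report pitch; the point here is only that both a magnitude and a direction (sign) fall out of the key-point geometry.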
Optionally, the apparatus further comprises a model building module, configured to:
collect face images of faces wearing different types of masks, the images being captured at different angles and under different illumination intensities;
acquire mask contour information and key point information from the face images wearing different types of masks; and
cut the mask regions out of the face images according to the mask contour information to obtain the mask sticker library, where the mask stickers are stored in RGBA format.
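The RGBA format matters because the alpha channel records the mask contour: pixels outside the contour are fully transparent, so a sticker can later be composited onto any face image. A minimal sketch, with assumed array layouts (the patent does not specify them):

```python
import numpy as np

def cut_out_mask_sticker(image_rgb, contour_mask):
    """Cut the mask region out of a photo into an RGBA sticker.

    image_rgb: (H, W, 3) uint8 photo of a person wearing a mask.
    contour_mask: (H, W) uint8, 255 inside the annotated mask
    contour and 0 elsewhere. Returns an (H, W, 4) RGBA sticker
    whose alpha channel is the contour mask.
    """
    h, w, _ = image_rgb.shape
    sticker = np.zeros((h, w, 4), dtype=np.uint8)
    sticker[..., :3] = image_rgb
    sticker[..., 3] = contour_mask  # alpha = inside-contour flag
    return sticker

def alpha_paste(face_rgb, sticker_rgba):
    """Composite an already-aligned RGBA sticker onto a face image
    using its alpha channel (standard alpha blending)."""
    alpha = sticker_rgba[..., 3:4].astype(float) / 255.0
    out = sticker_rgba[..., :3] * alpha + face_rgb * (1.0 - alpha)
    return out.astype(np.uint8)
```

Only the pixels inside the contour survive compositing; the rest of the target face image shows through unchanged.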
Optionally, the obtaining module is specifically configured to:
acquire a plurality of face images, none of which contains a mask; and
align the plurality of face images to obtain the face image to be pasted.
Optionally, the determining module is configured to:
search the pre-established mask sticker library, according to the target pose angle information of the face, for the mask sticker whose pose angles are closest in Euclidean distance to the target pose angle information.
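The lookup described above can be sketched as a Euclidean nearest-neighbour search over the pose angles recorded for each sticker. The array layout and (yaw, pitch, roll) ordering are assumptions for illustration; the patent does not specify the library's data structure:

```python
import numpy as np

def select_target_sticker(face_pose, sticker_poses):
    """Pick the library sticker whose pose angles are nearest.

    face_pose: (3,) array-like, e.g. (yaw, pitch, roll) in degrees.
    sticker_poses: (N, 3) array, one row of pose angles per sticker.
    Returns the index of the sticker minimising the Euclidean
    distance to face_pose.
    """
    face_pose = np.asarray(face_pose, dtype=float)
    sticker_poses = np.asarray(sticker_poses, dtype=float)
    dists = np.linalg.norm(sticker_poses - face_pose, axis=1)
    return int(np.argmin(dists))
```

For a library of a few hundred stickers a brute-force scan like this is ample; a k-d tree would only pay off at much larger scales.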
Optionally, the alignment module is specifically configured to:
perform a similarity transformation on the position information of the key points of the target mask sticker according to the target position information of the face key points;
acquire the anchor points required for pasting; and
align the key points of the target mask sticker with the face key points according to the anchor points.
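The similarity transformation step can be sketched with the classic least-squares (Umeyama) estimate of the scale, rotation and translation mapping the sticker key points onto the face key points. This is one standard way to realise such a step, not necessarily the patent's exact procedure:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform mapping src onto dst.

    src, dst: (N, 2) arrays of matching key points (e.g. sticker
    key points and face key points). Returns a 2x3 matrix A such
    that dst ~= src @ A[:, :2].T + A[:, 2] (Umeyama's method).
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean

    # Cross-covariance of the centred point sets.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)

    # Guard against reflections: force a proper rotation.
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt

    # Optimal isotropic scale and translation.
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])
```

The resulting 2x3 matrix can then be used to warp the whole RGBA sticker before compositing, so that its key points (anchor points) land on the corresponding face key points.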
It should be noted that the implementations in this embodiment may be used individually or combined in any non-conflicting manner; the present application is not limited in this respect.
Since the apparatus embodiment is basically similar to the method embodiment, it is described briefly; for relevant details, refer to the description of the method embodiment.
The embodiment of the invention provides a face recognition apparatus for: obtaining a face image to be pasted; calculating target position information of face key points in the face image and target pose angle information of the face; determining, according to the target pose angle information and a pre-established mask sticker library, a target mask sticker corresponding to the target pose angle information; aligning the key points of the target mask sticker with the face key points according to the position information of the key points of the target mask sticker and the target position information of the face key points; pasting the target mask sticker onto the face image to be pasted to obtain a target face image; and performing face recognition on the target face image. By transferring mask samples onto a database of unmasked face images, a large amount of high-quality training data can be obtained, and training on this data improves face recognition performance.
Still another embodiment of the present invention provides a terminal device, configured to execute the face recognition method provided in the foregoing embodiment.
Fig. 8 is a schematic structural diagram of a terminal device of the present invention. As shown in fig. 8, the terminal device includes: at least one processor 801 and a memory 802;
the memory stores a computer program, and the at least one processor executes the computer program stored in the memory to implement the face recognition method provided by the above embodiments.
The terminal device provided by this embodiment obtains a face image to be pasted; calculates target position information of face key points in the face image and target pose angle information of the face; determines, according to the target pose angle information and a pre-established mask sticker library, a target mask sticker corresponding to the target pose angle information; aligns the key points of the target mask sticker with the face key points according to the position information of the key points of the target mask sticker and the target position information of the face key points; pastes the target mask sticker onto the face image to be pasted to obtain a target face image; and performs face recognition on the target face image. By transferring mask samples onto a database of unmasked face images, a large amount of high-quality training data can be obtained, and training on this data improves face recognition performance.
Yet another embodiment of the present application provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed, the method for face recognition provided in any of the above embodiments is implemented.
According to the computer-readable storage medium of this embodiment, a face image to be pasted is obtained; target position information of face key points in the face image and target pose angle information of the face are calculated; a target mask sticker corresponding to the target pose angle information is determined according to the target pose angle information and a pre-established mask sticker library; the key points of the target mask sticker are aligned with the face key points according to the position information of the key points of the target mask sticker and the target position information of the face key points; the target mask sticker is pasted onto the face image to be pasted to obtain a target face image; and face recognition is performed on the target face image. By transferring mask samples onto a database of unmasked face images, a large amount of high-quality training data can be obtained, and training on this data improves face recognition performance.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may be referred to one another.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, electronic devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing electronic device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing electronic device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing electronic devices to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing electronic device to cause a series of operational steps to be performed on the computer or other programmable electronic device to produce a computer implemented process such that the instructions which execute on the computer or other programmable electronic device provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or electronic device that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or electronic device. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or electronic device that comprises the element.
The face recognition method and face recognition apparatus provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the above examples is only intended to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, vary the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A face recognition method, comprising:
acquiring a face image to be pasted;
calculating target position information of face key points in the face image and target pose angle information of the face;
determining, according to the target pose angle information and a pre-established mask sticker library, a target mask sticker corresponding to the target pose angle information;
aligning key points of the target mask sticker with the face key points according to position information of the key points of the target mask sticker and the target position information of the face key points;
pasting the target mask sticker onto the face image to be pasted to obtain a target face image; and
performing face recognition on the target face image.
2. The recognition method according to claim 1, wherein the calculating target position information of face key points in the face image and target pose angle information of the face comprises:
processing the face image through a preset detection algorithm to obtain the target position information of the face key points and the target pose angle information of the face, wherein the target pose angle information comprises a magnitude and a direction of a pose angle.
3. The recognition method according to claim 1, wherein the pre-established mask sticker library is created in the following manner:
collecting face images of faces wearing different types of masks, the face images being captured at different angles and under different illumination intensities;
acquiring mask contour information and key point information from the face images wearing different types of masks; and
cutting the mask regions out of the face images according to the mask contour information to obtain the mask sticker library, wherein the mask sticker library is in RGBA format.
4. The recognition method according to claim 1, wherein the acquiring a face image to be pasted comprises:
acquiring a plurality of face images, none of which contains a mask; and
aligning the plurality of face images to obtain the face image to be pasted.
5. The recognition method according to claim 1, wherein the determining, according to the target pose angle information and a pre-established mask sticker library, a target mask sticker corresponding to the target pose angle information comprises:
searching the pre-established mask sticker library, according to the target pose angle information of the face, for the mask sticker whose pose angles are closest in Euclidean distance to the target pose angle information.
6. The recognition method according to claim 1, wherein the aligning key points of the target mask sticker with the face key points according to position information of the key points of the target mask sticker and the target position information of the face key points comprises:
performing a similarity transformation on the position information of the key points of the target mask sticker according to the target position information of the face key points;
acquiring anchor points required for the pasting; and
aligning the key points of the target mask sticker with the face key points according to the anchor points.
7. A face recognition apparatus, comprising:
an obtaining module, configured to acquire a face image to be pasted;
a calculation module, configured to calculate target position information of face key points in the face image and target pose angle information of the face;
a determining module, configured to determine, according to the target pose angle information and a pre-established mask sticker library, a target mask sticker corresponding to the target pose angle information;
an alignment module, configured to align key points of the target mask sticker with the face key points according to position information of the key points of the target mask sticker and the target position information of the face key points;
a sticker module, configured to paste the target mask sticker onto the face image to be pasted to obtain a target face image; and
a recognition module, configured to perform face recognition on the target face image.
8. The recognition apparatus according to claim 7, wherein the apparatus further comprises a model building module configured to:
collect face images of faces wearing different types of masks, the face images being captured at different angles and under different illumination intensities;
acquire mask contour information and key point information from the face images wearing different types of masks; and
cut the mask regions out of the face images according to the mask contour information to obtain the mask sticker library.
9. A terminal device, comprising: at least one processor and memory;
the memory stores a computer program; the at least one processor executes the computer program stored by the memory to implement the face recognition method of any one of claims 1-6.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when executed, implements the face recognition method of any one of claims 1-6.
CN202010424132.2A 2020-05-19 2020-05-19 Face recognition method, face recognition device, terminal equipment and storage medium Pending CN111695431A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010424132.2A CN111695431A (en) 2020-05-19 2020-05-19 Face recognition method, face recognition device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010424132.2A CN111695431A (en) 2020-05-19 2020-05-19 Face recognition method, face recognition device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111695431A true CN111695431A (en) 2020-09-22

Family

ID=72477247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010424132.2A Pending CN111695431A (en) 2020-05-19 2020-05-19 Face recognition method, face recognition device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111695431A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257512A (en) * 2020-09-25 2021-01-22 福建天泉教育科技有限公司 Indirect eye state detection method and computer-readable storage medium
CN112507963A (en) * 2020-12-22 2021-03-16 华南理工大学 Automatic generation and mask face identification method for mask face samples in batches
CN113011277A (en) * 2021-02-25 2021-06-22 日立楼宇技术(广州)有限公司 Data processing method, device, equipment and medium based on face recognition
US20220327864A1 (en) * 2020-10-12 2022-10-13 Hong Fu Jin Precision Industry (Wuhan) Co., Ltd. Face recognition method, device employing method, and readable storage medium
CN115205951A (en) * 2022-09-16 2022-10-18 深圳天海宸光科技有限公司 Wearing mask face key point data generation method
CN116503842A (en) * 2023-05-04 2023-07-28 北京中科睿途科技有限公司 Facial pose recognition method and device for wearing mask for intelligent cabin

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016370A (en) * 2017-04-10 2017-08-04 电子科技大学 One kind is based on the enhanced partial occlusion face identification method of data
CN107609481A (en) * 2017-08-14 2018-01-19 百度在线网络技术(北京)有限公司 The method, apparatus and computer-readable storage medium of training data are generated for recognition of face
CN108319943A (en) * 2018-04-25 2018-07-24 北京优创新港科技股份有限公司 A method of human face recognition model performance under the conditions of raising is worn glasses
CN108875511A (en) * 2017-12-01 2018-11-23 北京迈格威科技有限公司 Method, apparatus, system and the computer storage medium that image generates
CN108985257A (en) * 2018-08-03 2018-12-11 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN109819316A (en) * 2018-12-28 2019-05-28 北京字节跳动网络技术有限公司 Handle method, apparatus, storage medium and the electronic equipment of face paster in video
CN109977841A (en) * 2019-03-20 2019-07-05 中南大学 A kind of face identification method based on confrontation deep learning network
CN110399764A (en) * 2018-04-24 2019-11-01 华为技术有限公司 Face identification method, device and computer-readable medium
CN110852942A (en) * 2019-11-19 2020-02-28 腾讯科技(深圳)有限公司 Model training method, and media information synthesis method and device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257512A (en) * 2020-09-25 2021-01-22 福建天泉教育科技有限公司 Indirect eye state detection method and computer-readable storage medium
CN112257512B (en) * 2020-09-25 2023-04-28 福建天泉教育科技有限公司 Indirect eye state detection method and computer readable storage medium
US20220327864A1 (en) * 2020-10-12 2022-10-13 Hong Fu Jin Precision Industry (Wuhan) Co., Ltd. Face recognition method, device employing method, and readable storage medium
US11922724B2 (en) * 2020-10-12 2024-03-05 Hong Fu Jin Precision Industry (Wuhan) Co., Ltd. Face recognition method utilizing a face recognition model and a face sample library to detect mask images
CN112507963A (en) * 2020-12-22 2021-03-16 华南理工大学 Automatic generation and mask face identification method for mask face samples in batches
CN112507963B (en) * 2020-12-22 2023-08-25 华南理工大学 Automatic generation of batch mask face samples and mask face recognition method
CN113011277A (en) * 2021-02-25 2021-06-22 日立楼宇技术(广州)有限公司 Data processing method, device, equipment and medium based on face recognition
CN113011277B (en) * 2021-02-25 2023-11-21 日立楼宇技术(广州)有限公司 Face recognition-based data processing method, device, equipment and medium
CN115205951A (en) * 2022-09-16 2022-10-18 深圳天海宸光科技有限公司 Wearing mask face key point data generation method
CN116503842A (en) * 2023-05-04 2023-07-28 北京中科睿途科技有限公司 Facial pose recognition method and device for wearing mask for intelligent cabin
CN116503842B (en) * 2023-05-04 2023-10-13 北京中科睿途科技有限公司 Facial pose recognition method and device for wearing mask for intelligent cabin

Similar Documents

Publication Publication Date Title
CN111695431A (en) Face recognition method, face recognition device, terminal equipment and storage medium
US11455496B2 (en) System and method for domain adaptation using synthetic data
CN109408653B (en) Human body hairstyle generation method based on multi-feature retrieval and deformation
CN108764048B (en) Face key point detection method and device
CN110580723B (en) Method for carrying out accurate positioning by utilizing deep learning and computer vision
CN112184705B (en) Human body acupuncture point identification, positioning and application system based on computer vision technology
CN105740780B (en) Method and device for detecting living human face
US9942535B2 (en) Method for 3D scene structure modeling and camera registration from single image
CN111126304A (en) Augmented reality navigation method based on indoor natural scene image deep learning
US20180357819A1 (en) Method for generating a set of annotated images
CN105139007B (en) Man face characteristic point positioning method and device
CN113177977B (en) Non-contact three-dimensional human body size measuring method
CN109919007B (en) Method for generating infrared image annotation information
CN110263768A (en) A kind of face identification method based on depth residual error network
CN110689573B (en) Edge model-based augmented reality label-free tracking registration method and device
Urban et al. Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds
Campomanes-Alvarez et al. Computer vision and soft computing for automatic skull–face overlay in craniofacial superimposition
CN106485186A (en) Image characteristic extracting method, device, terminal device and system
CN112634125A (en) Automatic face replacement method based on off-line face database
JP4521568B2 (en) Corresponding point search method, relative orientation method, three-dimensional image measurement method, corresponding point search device, relative orientation device, three-dimensional image measurement device, corresponding point search program, and computer-readable recording medium recording the corresponding point search program
CN110942092B (en) Graphic image recognition method and recognition system
WO2013160663A2 (en) A system and method for image analysis
JP2004188201A (en) Method to automatically construct two-dimensional statistical form model for lung area
CN115937708A (en) High-definition satellite image-based roof information automatic identification method and device
Patterson et al. Landmark-based re-topology of stereo-pair acquired face meshes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination