CN114882550B - Face registration bottom-reserving method, device and equipment - Google Patents
Face registration bottom-reserving method, device and equipment
- Publication number
- CN114882550B (application CN202210391081.7A)
- Authority
- CN
- China
- Prior art keywords
- face
- data
- parameterized
- user
- weight
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
The embodiments of this specification disclose a face registration base-retention method, apparatus, and device. The scheme includes: acquiring a 3D face image of a user's face and determining a 3D face point cloud of the face according to the 3D face image; predicting, from the 3D face point cloud, the fitting parameters required for 3D face reconstruction through a pre-trained fitting parameter prediction model; fitting a preset 3D reference face mesh according to the fitting parameters to obtain 3D parameterized face data of the corresponding user as first parameterized data; if 3D parameterized face data has already been registered and retained for the user, taking the retained 3D parameterized face data as second parameterized data and determining the timestamp of the second parameterized data; and generating, according to the first parameterized data, the second parameterized data, and the timestamp of the second parameterized data, third parameterized data to serve as the user's registered base face data.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular to a face registration base-retention method, apparatus, and device.
Background
With the development of computer and internet technologies, many services use face recognition for user identity verification, such as face-scan payment and face-scan access control.
Before face recognition can be performed, the user needs to complete face registration in advance so that the user's base face data is retained; face recognition is then carried out by comparing the face data of the user to be recognized against the retained base face data.
Because a user's face data changes dynamically, the registered base face data is conventionally updated according to its age, to keep stale data from degrading face recognition accuracy. However, updating rigidly by data age alone cannot optimize the accuracy of face recognition.
Based on this, a more reliable face registration base-retention scheme is needed.
Disclosure of Invention
One or more embodiments of the present disclosure provide a face registration base-retention method, apparatus, device, and storage medium, so as to solve the technical problem stated above: the need for a more reliable face registration base-retention scheme.
To solve the above technical problems, one or more embodiments of the present specification are implemented as follows:
one or more embodiments of the present disclosure provide a face registration base-retention method, including:
acquiring a 3D face image of a user face, and determining a 3D face point cloud of the user face according to the 3D face image;
according to the 3D face point cloud, predicting fitting parameters required by 3D face reconstruction through a pre-trained fitting parameter prediction model;
fitting a preset 3D reference face mesh according to the fitting parameters to obtain 3D parameterized face data of the corresponding user as first parameterized data, wherein the 3D reference face mesh comprises a plurality of vertices having connection relationships with one another, and each vertex carries face semantics;
if 3D parameterized face data has already been registered and retained for the user, taking the retained 3D parameterized face data as second parameterized data, and determining the timestamp of the second parameterized data;
and generating, according to the first parameterized data, the second parameterized data, and the timestamp of the second parameterized data, third parameterized data to serve as the user's registered base face data.
One or more embodiments of the present disclosure provide a face registration base-retention device, including:
an acquisition module, which acquires a 3D face image of a user's face and determines a 3D face point cloud of the face according to the 3D face image;
a fitting parameter prediction module, which predicts, according to the 3D face point cloud, the fitting parameters required for 3D face reconstruction through a pre-trained fitting parameter prediction model;
a parameterization module, which fits a preset 3D reference face mesh according to the fitting parameters to obtain 3D parameterized face data of the corresponding user as first parameterized data, wherein the 3D reference face mesh comprises a plurality of vertices having connection relationships, and each vertex carries face semantics;
a determining module, which, if 3D parameterized face data has already been registered and retained for the user, takes the retained 3D parameterized face data as second parameterized data and determines the timestamp of the second parameterized data;
and a generation module, which generates, according to the first parameterized data, the second parameterized data, and the timestamp of the second parameterized data, third parameterized data to serve as the user's registered base face data.
One or more embodiments of the present specification provide a non-volatile computer storage medium storing computer-executable instructions configured to:
acquiring a 3D face image of a user face, and determining a 3D face point cloud of the user face according to the 3D face image;
according to the 3D face point cloud, predicting fitting parameters required by 3D face reconstruction through a pre-trained fitting parameter prediction model;
fitting a preset 3D reference face mesh according to the fitting parameters to obtain 3D parameterized face data of the corresponding user as first parameterized data, wherein the 3D reference face mesh comprises a plurality of vertices having connection relationships, and each vertex carries face semantics;
if 3D parameterized face data has already been registered and retained for the user, taking the retained 3D parameterized face data as second parameterized data, and determining the timestamp of the second parameterized data;
and generating, according to the first parameterized data, the second parameterized data, and the timestamp of the second parameterized data, third parameterized data to serve as the user's registered base face data.
The above-mentioned at least one technical solution adopted by one or more embodiments of the present disclosure can achieve the following beneficial effects:
By using 3D parameterized face data as the user's registered base face data, only a single piece of 3D parameterized face data needs to be kept for each user, greatly reducing storage overhead. Moreover, when the retained 3D parameterized face data is updated, generating third parameterized data fully preserves the identity information carried by both the user's new and old 3D parameterized face data, which facilitates face recognition and comparison and optimizes the performance of the 3D face recognition system.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some of the embodiments described in the present description, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a face registration base-retention method according to one or more embodiments of the present disclosure;
Fig. 2 is a schematic diagram of a 2D face map in an application scenario provided in one or more embodiments of the present disclosure;
Fig. 3 is a schematic diagram of a 3D face map in an application scenario provided in one or more embodiments of the present disclosure;
Fig. 4 is a schematic diagram of a 3D face point cloud in an application scenario provided in one or more embodiments of the present disclosure;
Fig. 5 is a schematic diagram of 2D face registration base data in an application scenario provided in one or more embodiments of the present disclosure;
Fig. 6 is a schematic diagram of 3D face registration base data in an application scenario provided in one or more embodiments of the present disclosure;
Fig. 7 is a schematic diagram of an implementation process of 3D parameterized face data in an application scenario according to one or more embodiments of the present disclosure;
Fig. 8 is a schematic diagram of 3D parameterized face data in an application scenario provided in one or more embodiments of the present disclosure;
Fig. 9 is a schematic diagram of 3D parameterized face base data in an application scenario provided in one or more embodiments of the present disclosure;
Fig. 10 is a schematic flow chart of a face registration base-retention method in an application scenario provided in one or more embodiments of the present disclosure;
Fig. 11 is a schematic flow chart of a face recognition method in an application scenario according to one or more embodiments of the present disclosure;
Fig. 12 is a schematic structural diagram of a face registration base-retention device according to one or more embodiments of the present disclosure;
Fig. 13 is a schematic structural diagram of face registration base-retention equipment according to one or more embodiments of the present disclosure.
Detailed Description
The embodiments of this specification provide a face registration base-retention method, apparatus, device, and storage medium.
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present application.
Fig. 1 is a flow chart of a face registration base-retention method according to one or more embodiments of the present disclosure. The method can be applied to different business fields, such as offline automatic payment, internet finance, e-commerce, instant messaging, gaming, and public services. The process may be performed by computing devices in the relevant field (e.g., face-scan payment machines in offline payment), and some input parameters or intermediate results in the process allow manual intervention and adjustment to help improve accuracy.
In order to explain the scheme shown in fig. 1, face recognition in the conventional scheme is first described.
Fig. 2, Fig. 3, and Fig. 4 are schematic diagrams of a 2D face map, a 3D face map, and a 3D face point cloud in an application scenario according to one or more embodiments of the present disclosure. As the figures show, the 3D face map and the 3D face point cloud are far less recognizable to the naked eye, so 3D face recognition has a clear privacy-protection advantage over 2D face recognition. Note that an actual 2D face image is an RGB image, not the line drawing presented in Fig. 2.
A 3D face recognition system generally recognizes users through 3D face maps and 3D face point clouds, so it stores users' 3D face data; compared with a 2D face recognition system that stores users' 2D face images, its privacy protection is relatively strong.
Next, face registration and base retention in the conventional scheme is described.
Fig. 5 and Fig. 6 are, respectively, schematic diagrams of 2D face base data and 3D face base data in an application scenario provided in one or more embodiments of the present disclosure. Fig. 5 contains 2D faces collected from a user in different periods: compared with the first (leftmost) image, the second (rightmost) image shows the user wearing glasses, and the third (bottom) image was captured in relatively dim light, so storing multiple 2D face images per user can raise recognition accuracy. The 3D face recognition scheme is similar, mainly storing multiple 3D face maps (or 3D face point clouds) per user. The drawback of this conventional scheme is that, at registration, one or more 3D face maps of the user must be collected in advance and retained as the user's base face data, either directly or as 3D face point clouds generated from them; with multiple 3D face maps, multiple pieces of registered base face data are produced, occupying considerable storage and degrading the server's analysis performance.
Moreover, a user's face data changes dynamically; for example, if the base face data was retained long ago, the user's face shape may since have changed substantially for objective reasons. To keep stale base data from reducing recognition accuracy, the retained data is conventionally refreshed according to its age. However, updating rigidly by age alone cannot optimize recognition accuracy.
Based on this, the flow in fig. 1 may include the following steps:
S102: and acquiring a 3D face image of the face of the user, and determining a 3D face point cloud of the face of the user according to the 3D face image.
During service execution (such as face-scan payment or face-scan access control), the user triggers the face recognition process, and the user's face map needs to be acquired. The user may trigger face recognition on a general-purpose device such as a smartphone or personal PC, or on dedicated face-scan equipment (e.g., a face-scan vending machine, a self-service face-scan ordering machine, or a face-scan access control device).
Because the 3D face map is acquired based on depth information, it needs to be converted into a 3D face point cloud to facilitate subsequent processing.
The collected 3D face map may also contain other parts of the user; for example, the user's upper body may appear in it depending on how the user was standing, in which case the converted 3D point cloud also contains points for the upper body. Therefore, target detection is performed on the 3D point cloud with a pre-trained 3D target detection model to extract the user's 3D face point cloud. The 3D target detection model can be trained with deep learning, for example as a convolutional neural network.
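As a minimal sketch of this preparation step (all names are illustrative; a pinhole camera with known intrinsics fx, fy, cx, cy is assumed, and the 3D box would come from the pre-trained detection model, which is not implemented here):

```python
import numpy as np

def depth_to_point_cloud(depth_mm: np.ndarray,
                         fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a uint16 depth map (millimetres) to an (M, 3) point cloud."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64) / 1000.0       # millimetres -> metres
    valid = z > 0                                  # drop missing depth pixels
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

def crop_face_points(points: np.ndarray, bbox3d) -> np.ndarray:
    """Keep only points inside the detector's axis-aligned 3D box (assumed)."""
    lo, hi = np.asarray(bbox3d[0]), np.asarray(bbox3d[1])
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]
```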
S104: and predicting fitting parameters required by 3D face reconstruction through a pre-trained fitting parameter prediction model according to the 3D face point cloud.
In the conventional scheme, 3D face reconstruction can be performed in other ways. For example, a certain number of 3D faces are collected in advance to obtain the average shape and average texture of a face, and the eigenvectors of a covariance matrix, arranged in descending order of eigenvalue, are used to represent face shape and texture information, so that a 3D model can be generated in advance. After a 2D face image of the user is obtained, rendering parameters are derived from it and optimized together with the pre-generated 3D model so that the finally generated 3D image is as close as possible to the user's 2D face image, thereby recovering 3D face information from the 2D face map. However, this scheme depends heavily on the quality of the 2D face map itself: when that quality is poor, the recovered 3D face information is naturally poor as well.
Fig. 7 is a schematic diagram of an implementation process of 3D parameterized face data in an application scenario according to one or more embodiments of the present disclosure. In this embodiment, parameter prediction is not performed from a 2D face map; instead, the 3D face point cloud is taken as input, and the fitting parameters required for 3D face reconstruction are predicted with a pre-trained fitting parameter prediction model. Compared with a 2D face map, the data contained in a 3D face point cloud is more comprehensive and more accurate, which improves the accuracy of the predicted fitting parameters. The fitting parameter prediction model can be obtained by training a neural network model through deep learning.
S106: fitting a preset 3D reference face grid according to the fitting parameters to obtain 3D parameterized face data of a corresponding user, wherein the 3D reference face grid comprises a plurality of vertexes as first parameterized data, the vertexes have a connection relationship, and face semantics are respectively arranged on the vertexes.
As shown in Fig. 7, a 3D reference face mesh is pre-built. Unlike the 3D model in the conventional scheme, it comprises a plurality of vertices that have connection relationships, with face semantics set on each vertex; the 3D reference face mesh can thus be regarded as having parameterized every vertex it contains, so once the fitting parameters are obtained, fitting can be performed based on them to obtain the 3D parameterized face data. For example, the fitting parameters may be α1, α2, …, αN as shown in Fig. 7, and fitting is performed by S_new = S̄ + Σ_{i=1}^{N} αi·si, where S_new denotes the 3D parameterized face data, S̄ denotes the user's original 3D face point cloud, si is the representation of the face shape in the i-th dimension, and αi is the coefficient corresponding to si.
Parameterization in this scheme can be understood as follows: corresponding parameterization rules (such as the number of vertices, the connection relationships between vertices, the face semantics of each vertex, the position coordinates of key points, and the distances between edge points and other points) are set for the vertices of the 3D reference face mesh; the fitting parameters serve as input, and fitting the 3D reference face mesh does not change the established parameterization rules, so the output 3D parameterized face data still conforms to those rules, which is difficult to achieve in the conventional scheme.
Specifically, some of the parameterization rules are explained by example. The 3D reference face mesh contains a preset number of vertices (for example, 15000), each vertex is given explicit face semantics (for example, the 5000th vertex represents the nose point of a face), and the vertices share a fixed first connection topology that remains unchanged.
After the fitting parameters are obtained, the position coordinates of at least some vertices in the 3D reference face mesh are adjusted according to the basis-vector weights in the fitting parameters while the first connection topology is maintained. In the resulting 3D parameterized face data, the total number of vertices, the first connection topology among them, and the face semantics each vertex represents are all unchanged, so the parameterization of the 3D face map is realized through the fitting parameters and the 3D reference face mesh. Fig. 8 is a schematic diagram of 3D parameterized face data in an application scenario according to one or more embodiments of the present disclosure; its 3D facial detail is richer than that of the original 3D face point cloud, which increases the accuracy of face recognition.
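A sketch of this fitting step under the linear model above (array shapes and names are assumptions for illustration); the key property is that only vertex positions move, while the vertex count, connectivity, and per-vertex semantics stay fixed:

```python
import numpy as np

N_VERTICES = 15000                                  # fixed vertex budget

def fit_reference_mesh(mean_shape: np.ndarray,      # (N_VERTICES, 3)
                       shape_basis: np.ndarray,     # (K, N_VERTICES, 3)
                       alphas: np.ndarray           # (K,) predicted coefficients
                       ) -> np.ndarray:
    """S_new = S_bar + sum_i alpha_i * s_i; the topology is untouched."""
    assert mean_shape.shape == (N_VERTICES, 3)
    offset = np.tensordot(alphas, shape_basis, axes=1)   # (N_VERTICES, 3)
    return mean_shape + offset

# The triangle index list and the vertex-to-semantics table are shared by the
# reference mesh and every fitted result, so e.g. vertex 5000 stays the nose point.
```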
S108: and if the 3D parameterized face data of the user is registered for a reservation, taking the registered 3D parameterized face data of the reserved user as second parameterized data, and determining a time stamp of the second parameterized data.
The user registers and retains base data in advance through a corresponding terminal: at registration, the user enters a user identity identifier at the terminal, a 3D camera captures a photo of the user's face, and the identifier and photo are uploaded to the computing device.
In general, a 3D face map of the user's face is obtained from the face photo and parameterized in the manner above to obtain the first parameterized data, which is bound directly to the identity identifier and retained as the registration base.
However, for special reasons (e.g., the user forgot that the relevant account had already been registered), a user may attempt to register multiple times, while a user is typically allowed to register only once.
Therefore, after the first parameterized data and the identity identifier are obtained, the 3D face database is searched by the identity identifier to judge whether 3D parameterized face data has already been registered and retained for the user.
If not, the user is a newly registering user, and registration and base retention are performed according to the user's identity identifier and the first parameterized data; for example, the user's identity identifier (e.g., the user ID) and the 3D parameterized data generated in step S106 are stored as one record in the 3D face database.
If retained 3D parameterized face data of the user is found, the user is a previously registered user, and notification information can be returned to inform the user that registration was already completed. The timestamp of the second parameterized data may be taken from the time the second parameterized data was generated, or from the time it was stored into the 3D face database.
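A dictionary-backed sketch of this register-or-fetch decision (a stand-in for the 3D face database; a real system would use a persistent store, and all names here are illustrative):

```python
import time

face_db = {}   # user_id -> {"params": 3D parameterized face data, "ts": seconds}

def register_or_fetch(user_id: str, first_params):
    record = face_db.get(user_id)
    if record is None:                   # new user: retain first_params as base
        face_db[user_id] = {"params": first_params, "ts": time.time()}
        return None
    # previously registered user: return the retained base (second parameterized
    # data) together with its timestamp
    return record["params"], record["ts"]
```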
Based on this, Fig. 9 is a schematic diagram of 3D parameterized face base data in one or more embodiments of the present disclosure. Comparing Fig. 6 and Fig. 9, relative to the conventional scheme that retains 3D face maps or 3D face point clouds as the user's registered base face data, using 3D parameterized face data as the base means only one piece of 3D parameterized face data needs to be saved per user, greatly reducing storage overhead.
Next, when the user triggers the face recognition process, the server performs face recognition on the user according to the first parameterized data; if recognition succeeds, step S110 is performed, and if it fails, step S110 is not performed.
S110: and generating third parameterized data serving as the face data of the user registration reserved according to the first parameterized data, the second parameterized data and the timestamp of the second parameterized data.
In the conventional scheme, when the user's registered base face data is updated, the old 3D face map in the 3D face library is simply replaced with a new one at a preset period, which is relatively rigid because it considers only data age.
Here, because 3D parameterized face data, rather than a 3D face map or 3D face point cloud, serves as the user's registered base, a matching update scheme is provided: instead of replacing the second parameterized data with the first parameterized data, third parameterized data is generated directly and taken as the user's registered base face data, after which the second parameterized data is deleted from the 3D face database according to the user's identity identifier.
In this way, the identity information carried by the user's new and old 3D parameterized face data is fully preserved, which facilitates face recognition and comparison and optimizes the performance of the 3D face recognition system.
Based on the method of Fig. 1, this specification also provides some specific embodiments and extensions of the method, which are described below.
In one or more embodiments of the present disclosure, the user's 3D face point cloud is determined and the fitting parameters are predicted from it, as described above. To improve both the accuracy of fitting-parameter prediction and the smoothness of fitting the 3D reference face mesh, after the user's 3D face point cloud is obtained, key-point detection may be applied to it with a pre-trained face key-point detection model (which, similar to the 3D target detection model, may be generated by convolutional neural network training) to extract key-point information of the user's face, such as the nose tip, mouth corners, and pupils.
After the key-point information is extracted, the fitting parameters are predicted from the key-point information together with the 3D face point cloud, improving prediction accuracy. As shown in Fig. 7, when the 3D reference face mesh is fitted, the key points may be fitted first, and the fitting of the other vertices can then be adaptively smoothed based on the key-point information to increase the smoothness of the 3D parameterized face data.
From the key-point information, the corresponding key-point information in the first parameterized data and in the second parameterized data is obtained. As described above, when the user triggers the face recognition process, the server recognizes the user according to the first parameterized data and, if recognition succeeds, updates the second parameterized data using the first parameterized data.
Note that the 3D face database contains the registered base face data of each user. During face recognition, after the server obtains the first parameterized data, it compares this 3D parameterized face data against each piece of registered base data, selects the base data with the highest similarity, and identifies the user from the user information bound to that base data, thereby verifying the user's identity. For this user, the retained base data with the highest similarity can be regarded as the second parameterized data; how the similarity between the first and second parameterized data is determined is described in detail below.
Specifically, a corresponding second connection topology is first generated from the position coordinates of the key points on each side; the second connection topology contains only the key-point information and no longer contains the other vertices. The key points are the vertices of the core parts of the face and reflect its basic information, so when the similarity between the first and second parameterized data is determined through the second connection topology (for example, via the Euclidean distance between them), an accurate recognition result can be achieved while the fitting comparison remains fast and consumes few computing resources. When the similarity exceeds a preset threshold, the user's 3D face recognition is judged successful.
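A minimal similarity sketch over the key-point-only topology (it assumes both sides expose the same ordered set of key-point coordinates; the distance-to-similarity mapping and the threshold value are illustrative assumptions):

```python
import numpy as np

def keypoint_similarity(kp_a: np.ndarray, kp_b: np.ndarray) -> float:
    """Map the mean Euclidean key-point distance into a (0, 1] similarity."""
    dist = np.linalg.norm(kp_a - kp_b, axis=1).mean()
    return 1.0 / (1.0 + dist)

def is_match(kp_a, kp_b, threshold: float = 0.9) -> bool:
    # recognition succeeds when the similarity exceeds the preset threshold
    return keypoint_similarity(np.asarray(kp_a), np.asarray(kp_b)) > threshold
```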
In one or more embodiments of the present disclosure, because the user's new 3D parameterized face data better matches the user's current actual facial features, its reference value is greater than that of the old data; that is, the reference value of the first parameterized data exceeds that of the second. The third parameterized data therefore needs to be generated by combining the weight of the first parameterized data with the weight of the second parameterized data.
Specifically, the weight of the first parameterized data and the weight of the second parameterized data are first determined according to the timestamp of the second parameterized data; then, according to these weights, the first and second parameterized data are weighted and fused to generate the third parameterized data.
Further, when determining these weights, the farther the timestamp of the second parameterized data is from the timestamp of the first parameterized data, i.e., the farther it is from the current time, the smaller the reference value of the second parameterized data; the two timestamps therefore need to be measured against each other to determine the weight of the first parameterized data and the weight of the second parameterized data.
Specifically, the timestamp of the first parameterized data is determined first, and then the time difference between it and the timestamp of the second parameterized data is determined; the larger the time difference, the farther the timestamp of the second parameterized data is from the current time.
To help ensure that the third parameterized data matches the user's current face data, the farther the timestamp of the second parameterized data is from the current time, the lower its weight should be. Therefore, the weight of the second parameterized data and the weight of the first parameterized data are determined according to a negative correlation set between the time difference and the weight of the second parameterized data: the larger the time difference, the lower that weight.
Further, if the timestamp of the second parameterized data is very far from the current time, the time difference becomes very large and imposes computation pressure; so, when determining the two weights under the set negative correlation, the time difference can be scaled into a certain range to reduce the amount of computation.
Based on this, time-scale contraction is performed on the time difference to obtain a denominator term in a set positive correlation with it: the larger the time difference, the larger the denominator term.
Then, the weight of the second parameterized data, in a set negative correlation with the time difference, is determined from the denominator term: with the numerator unchanged, a larger denominator term yields a smaller ratio, i.e., the larger the time difference, the lower the weight of the second parameterized data.
Finally, the weight of the first parameterized data is determined from the weight of the second parameterized data. That is, because the weight of the second parameterized data is lower the farther its timestamp is from the current time, that weight is determined first from the time difference, and the weight of the first parameterized data is then obtained by combining it with the weight of the second parameterized data.
More intuitively, the above-described method of generating third parametric data is described below by way of an exemplary scheme.
For example, assume the second parameterized data is S1, composed of N three-dimensional vertices (x, y, z): S1 = {v1, v2, …, vN}, vi = (xi, yi, zi), where xi, yi, and zi are the x-, y-, and z-axis coordinates of the i-th vertex of the second parameterized data.
Likewise, let the first parameterized data be S2, composed of N three-dimensional vertices (x, y, z): S2 = {u1, u2, …, uN}, ui = (xi, yi, zi), where xi, yi, and zi are the x-, y-, and z-axis coordinates of the i-th vertex of the first parameterized data.
When the first parametric data and the second parametric data are weighted and fused, the expression is as follows:
S_new = {w1, w2, …, wN}, wi = λ×vi + (1−λ)×ui, where S_new is the third parameterized data, wi is the i-th three-dimensional vertex coordinate of the third parameterized data, λ is the weight of the second parameterized data, and (1−λ) is the weight of the first parameterized data.
Here λ = 1 / (2 × ⌈(t2 − t1) / 3600⌉), where t1 is the timestamp of the second parameterized data and t2 is the timestamp of the first parameterized data; in the above formula, t1 and t2 are treated as second-level timestamps.
In addition, (t2 − t1) / 3600 scales the time difference to the order of hours; scaling the time difference into a smaller range facilitates computation and reduces the dispersion of the data. Since (t2 − t1) / 3600 may be less than 1, simply truncating it would give 0, and the denominator term must not be 0; to avoid this, the ceiling ⌈·⌉ is taken so that the denominator term is at least 1, and when it equals 1, λ is 0.5.
It can be understood that when ⌈(t2 − t1) / 3600⌉ takes the value 1, t1 differs from t2 by a relatively short period (within an hour), i.e., the timestamp of the second parameterized data is close to that of the first, and the second parameterized data receives its maximum weight of 0.5. Clearly, if t1 lags t2 by longer, the second (old) parameterized data is less reliable, and the computed weight λ of the second parameterized data is correspondingly smaller.
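A sketch of this fusion step; the hour-level ceiling form of λ is one consistent reading of the formula above and should be treated as an assumption (any monotonically decreasing λ with λ ≤ 0.5 preserves the described behavior of never letting old data dominate):

```python
import math
import numpy as np

def second_data_weight(t1: float, t2: float) -> float:
    """lambda = 1 / (2 * ceil((t2 - t1) / 3600)) for second-level timestamps."""
    hours = math.ceil((t2 - t1) / 3600.0)   # denominator term, grows with the gap
    return 1.0 / (2.0 * max(hours, 1))      # guard: the term must not be 0

def fuse(first: np.ndarray, second: np.ndarray, t1: float, t2: float) -> np.ndarray:
    """w_i = lambda * v_i + (1 - lambda) * u_i over all N vertices at once."""
    lam = second_data_weight(t1, t2)
    return lam * second + (1.0 - lam) * first
```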
More intuitively, Fig. 10 is a schematic flow chart of a face registration base-retention method in an application scenario provided in one or more embodiments of the present disclosure.
As shown in Fig. 10, user 1, user 2, and user 3 are previously registered users while user 4 is a newly registering user, so the new-user registration process is performed for user 4, and the base-data update process is performed for user 1, user 2, and user 3.
Specifically, for user 4, a record is generated from user 4's identity identifier (i.e., the user ID) and 3D parameterized face data, and the record is stored into the 3D face database, i.e., the base database shown in the figure.
Taking user 1 as an example, user 1's retained second parameterized data is taken out of the 3D face database; the first and second parameterized data are weighted and fused according to their respective weights to generate third parameterized data; the third parameterized data is taken as user 1's registered base face data and replaces the old record, i.e., the third parameterized data is stored back into the 3D face database.
In one or more embodiments of the present disclosure, as noted above, a user's face data changes dynamically: if the second parameterized data was retained early, a larger difference arises between it and the first parameterized data, even though the second parameterized data is normally updated within a preset period. An early retention time usually means the user performed no face recognition during one or more preset periods, so no first parameterized data could be obtained for updating; meanwhile, other users perform face recognition frequently within a preset period.
Based on this, the first parameterized data (and likewise the second) is divided in advance into a plurality of face regions, each with a fixed set of vertices; for example, vertices 0-999 form an eyebrow region, vertices 1000-1999 a left-eyebrow region, and so on, up to vertex 15000 for the edge region. In general, under natural physiological factors (excluding artificial factors such as surgery or injury), a face region changes slowly.
However, certain face regions of a user change on a periodic schedule due to specific human factors, and a face region with such a period change time is taken as a designated face region. For example, a user's haircuts are periodic (e.g., once a month) and the hairstyle generally does not change within a short period, so the hair-covering area in the user's face region (e.g., the area of the eyebrows covered by hair in the eyebrow region) also changes periodically; likewise, a user may wear makeup on weekdays and none on weekends, so the foreign-object covering area in the user's face region (e.g., the thickness and length of the left eyebrow before and after makeup in the left-eyebrow region) is also periodic. In such cases, the weight of the second parameterized data can be adaptively adjusted upward so that the third parameterized data better matches the current user's face data.
Specifically, one or more designated face regions in the second parameterized data, and the period change time corresponding to each designated face region, are determined first. The period change time can be set from experience, or user-specific behavior data can be collected and the period change time predicted from it.
Then, the time difference is matched against the period change time to judge whether the in-period positions corresponding to the first and second parameterized data meet a set consistency condition.
Matching the time difference against the period change time matches the in-period positions of the first and second parameterized data. The consistency condition ensures that the designated face region in the first parameterized data is basically consistent with the designated face region in the second; that is, when the corresponding in-period positions meet the set consistency condition, the user's new and old face data can be taken as basically consistent.
For example, suppose a user typically cuts hair on the 3rd of each month, so the period of change of the user's hair-covering area is one month; that is, over the period from one month to the next, the change in the hair-covering area is first small and then large. If the timestamp of the first parameterized data falls on the 3rd day of a month and the timestamp of the second parameterized data falls on the 6th day of a month, the hair-covering area in the first parameterized data is basically consistent with that in the second, and the set consistency condition can be considered met.
Based on this, even if the timestamps of the first and second parameterized data differ considerably, as long as their in-period positions are basically consistent, the reference value of the second parameterized data remains large, and it is still given a higher weight.
Finally, if the consistency condition is not met, the weight of the second parameterized data is determined to be lower than that of the first parameterized data; if it is met, the weight of the second parameterized data is adjusted relatively upward, higher than it would be in the unmet case.
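A sketch of this in-period position check (the period length, tolerance, boost factor, and 0.5 cap are illustrative assumptions; the description fixes only the up-regulation behavior when the positions agree):

```python
def phases_consistent(t1: float, t2: float, period_s: float,
                      tolerance: float = 0.1) -> bool:
    """True when both timestamps fall at nearly the same position in the cycle."""
    p1, p2 = (t1 % period_s) / period_s, (t2 % period_s) / period_s
    gap = abs(p1 - p2)
    return min(gap, 1.0 - gap) <= tolerance    # handle wrap-around on the cycle

def adjust_weight(lam: float, consistent: bool, boost: float = 1.5) -> float:
    # up-regulate the second data's weight when the cycle positions agree;
    # the cap keeps the old data from outweighing the new data
    return min(lam * boost, 0.5) if consistent else lam
```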
Further, to perform face recognition more accurately for users whose base data was retained long ago while accounting for the influence of natural physiological factors, a face change level is set for the face regions other than the designated face regions. The face change level refers to the degree to which each region changes over time under natural physiological factors (excluding artificial factors such as surgery or injury): the higher the level, the faster the change. Generally, the edge region has the highest face change level because it lacks bone support.
Then, the weight of the second parameterized data is compensated according to the face change level: the lower the level, the smaller the possible degree of facial change and the higher the degree of compensation, so the face change level is inversely related to the compensation degree.
Finally, if the compensation degree is greater than a preset threshold, the weight of the second parameterized data is adjusted relatively upward: the higher the compensation degree, the higher the weight. Conversely, if the compensation degree is less than or equal to the preset threshold, the weight of the second parameterized data is adjusted relatively downward. In this way, the influence of time on face shape is taken into account, so the third parameterized data better matches the current user's face data and face recognition is performed more accurately.
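A sketch of the level-based compensation (region names, level values, the inverse mapping, and the scale factors are illustrative; the description fixes only the monotonic relationships and the threshold test):

```python
FACE_CHANGE_LEVEL = {"nose": 1, "left_eyebrow": 2, "edge": 5}   # higher = faster

def compensation(level: int, max_level: int = 5) -> float:
    """Inverse relation: slowly changing regions earn a higher compensation."""
    return (max_level - level + 1) / max_level

def compensate_weight(lam: float, level: int, threshold: float = 0.6) -> float:
    c = compensation(level)
    # above the threshold, up-regulate the second data's weight; otherwise down
    return lam * (1.0 + c) if c > threshold else lam * (1.0 - 0.5 * c)

# usage: compensate_weight(lam, FACE_CHANGE_LEVEL["edge"])
```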
Fig. 11 is a schematic flow chart of a face recognition method in an application scenario according to one or more embodiments of the present disclosure, with the scheme applied to face-scan payment. First, in the 3D face data acquisition and preprocessing module, a 3D face depth map of the user is acquired with a structured-light 3D camera, preprocessed (e.g., noise reduction), and passed through face target recognition to obtain 3D face information (including the 3D face point cloud and the key-point information). Then, in the 3D face parameterization module, 3D face parameterization is carried out with the preset 3D reference face mesh to obtain 3D parameterized face data. Finally, in the 3D face registration and base-retention module, whether the user has registered and retained base data is judged by the user ID: if not, a record is generated and stored into the 3D face database; if so, the second parameterized data is taken out, the first and second parameterized data are weighted and fused, and a new record is generated and stored into the 3D face database.
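Tying the earlier sketches to this flow, a thin driver might look as follows (the fitting-parameter predictor is stubbed as any callable mapping a point cloud to coefficients; all names remain illustrative and reuse the helpers sketched above):

```python
import time

def register_flow(user_id, depth_mm, intrinsics, bbox3d,
                  predictor, mean_shape, shape_basis):
    cloud = crop_face_points(depth_to_point_cloud(depth_mm, *intrinsics), bbox3d)
    alphas = predictor(cloud)                       # pre-trained model, stubbed
    first = fit_reference_mesh(mean_shape, shape_basis, alphas)
    hit = register_or_fetch(user_id, first)
    if hit is None:                                 # new user: base already stored
        return first
    second, t1 = hit                                # old user: fuse new with old
    third = fuse(first, second, t1, time.time())
    face_db[user_id] = {"params": third, "ts": time.time()}
    return third
```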
Based on the same thought, one or more embodiments of the present disclosure further provide apparatuses and devices corresponding to the above method, as shown in fig. 12 and fig. 13.
Fig. 12 is a schematic structural diagram of a face registration base-retention device according to one or more embodiments of the present disclosure, the device including:
The acquisition module 1202 acquires a 3D face image of a user face and determines a 3D face point cloud of the user face according to the 3D face image;
The fitting parameter prediction module 1204 predicts fitting parameters required by 3D face reconstruction through a pre-trained fitting parameter prediction model according to the 3D face point cloud;
The parameterization module 1206 fits a preset 3D reference face mesh according to the fitting parameters to obtain 3D parameterized face data of the corresponding user as first parameterized data, wherein the 3D reference face mesh comprises a plurality of vertices having connection relationships, and each vertex carries face semantics;
the determining module 1208, if 3D parameterized face data has already been registered and retained for the user, takes the retained 3D parameterized face data as second parameterized data and determines the timestamp of the second parameterized data;
the generating module 1210 generates, according to the first parameterized data, the second parameterized data, and the timestamp of the second parameterized data, third parameterized data to serve as the user's registered base face data.
Optionally, the generating module 1210 determines the weight of the first parametric data and the weight of the second parametric data according to the timestamp of the second parametric data;
and carrying out weighted fusion on the first parameterized data and the second parameterized data according to the weight of the first parameterized data and the weight of the second parameterized data to generate third parameterized data.
Optionally, the generating module 1210 determines a timestamp of the first parametric data;
Determining a time difference between a time stamp of the first parametric data and a time stamp of the second parametric data;
And determining the weight of the second parameterized data and the weight of the first parameterized data according to a negative correlation set between the time difference and the weight of the second parameterized data.
Optionally, the generating module 1210 performs time scale contraction processing according to the time difference to obtain a denominator term in a set positive correlation with the time difference;
determining, according to the denominator term, the weight of the second parameterized data that is in a set negative correlation with the time difference;
And determining the weight of the first parameterized data according to the weight of the second parameterized data.
Optionally, the device further includes:
a judging module, which judges, according to the user's identity identifier, whether 3D parameterized face data has been registered and retained for the user;
and if not, performs registration and base retention according to the user's identity identifier and the first parameterized data.
Optionally, the vertices in the 3D reference face mesh are fixed in number, and the connection relationship between the multiple vertices is a fixed first connection topological relationship;
The parameterization module 1206 adjusts, according to the basis-vector weights in the fitting parameters, the position coordinates of at least some of the vertices in the 3D reference face mesh while maintaining the first connection topology, to obtain the 3D parameterized face data.
Optionally, the fitting parameter prediction module 1204 performs keypoint detection on the 3D face point cloud by using a pre-trained face keypoint detection model, so as to determine the keypoint information of the user face therein;
And predicting fitting parameters required by 3D face reconstruction through a pre-trained fitting parameter prediction model according to the 3D face point cloud and the key point information.
Optionally, the device further includes:
the identification module is used for determining key point information corresponding to the first parameterized data and the second parameterized data respectively;
Generating respective corresponding second connection topological relations according to the key point information;
determining the similarity between the first parameterized data and the second parameterized data through the second connection topological relation;
And determining that the similarity exceeds a preset threshold value to determine that the 3D face recognition of the user is successful.
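One way such a topology-based similarity could be computed, assuming the second connection topology is summarized by the edge lengths of a shared keypoint edge list and compared by cosine similarity (both are assumptions of this sketch):

```python
import numpy as np

def topology_similarity(kp_a, kp_b, edges):
    """Compare two faces through connection topologies built on keypoints.

    kp_a, kp_b: (M, 3) keypoint coordinates for the two parameterized faces.
    edges:      list of (i, j) index pairs shared by both topologies.
    """
    def edge_lengths(kp):
        return np.array([np.linalg.norm(kp[i] - kp[j]) for i, j in edges])

    la, lb = edge_lengths(kp_a), edge_lengths(kp_b)
    # Cosine similarity of the edge-length vectors; recognition succeeds
    # when this exceeds a preset threshold.
    return float(la @ lb / (np.linalg.norm(la) * np.linalg.norm(lb)))
```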
Optionally, the generating module 1210 determines a timestamp of the first parameterized data; determines the time difference between the timestamp of the first parameterized data and the timestamp of the second parameterized data; determines, in the second parameterized data, one or more designated face regions and the periodic change time corresponding to the designated face regions; and matches the time difference against the periodic change time to judge whether the within-period positions corresponding to the first and second parameterized data meet a set consistency condition. If not, the weight of the second parameterized data is set lower than the weight of the first parameterized data; if so, the weight of the second parameterized data is adjusted relatively upward.
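A hedged sketch of the within-period consistency check, under the assumptions that the periodic change time is a single period length and that consistency means the two captures fall at nearly the same phase of that period (the tolerance is hypothetical):

```python
def phase_consistent(time_diff, period_seconds, tolerance=0.1):
    """Check whether two captures sit at consistent within-period positions.

    Example: a hair-covering region that grows and is cut on a roughly
    fixed cycle. tolerance is an assumed fraction of the period within
    which the two phases still count as consistent.
    """
    phase = (time_diff % period_seconds) / period_seconds  # in [0.0, 1.0)
    # Consistent when the gap is close to a whole number of periods.
    return min(phase, 1.0 - phase) <= tolerance
```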
Optionally, the generating module 1210 determines the face change level corresponding to the face regions other than the designated face regions in the second parameterized data, and compensates the weight of the second parameterized data according to that face change level; if the compensation degree exceeds a preset threshold, the weight of the second parameterized data is adjusted relatively upward, the face change level being inversely related to the compensation degree.
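A sketch of the compensation step under the assumption that the face change level is normalized to [0, 1] and that the compensation degree is simply its complement (neither is fixed by the disclosure):

```python
def compensate_weight(w_second, face_change_level, threshold=0.5):
    """Compensate the down-weighted retained data when the face regions
    outside the designated areas changed little.

    face_change_level: assumed in [0, 1]; the compensation degree is
    inversely related to it. threshold is an assumed cut-off.
    """
    compensation = 1.0 - face_change_level  # little change -> strong compensation
    if compensation > threshold:
        # Relative upward adjustment of the retained data's weight.
        w_second = min(1.0, w_second * (1.0 + compensation))
    return w_second, compensation
```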
Optionally, the designated face region includes at least one of the following: a hair covering region and a foreign object covering region.
Fig. 13 is a schematic structural diagram of a face registration bottom-reserving device provided in one or more embodiments of the present disclosure, including:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring a 3D face image of a user face, and determining a 3D face point cloud of the user face according to the 3D face image;
according to the 3D face point cloud, predicting fitting parameters required by 3D face reconstruction through a pre-trained fitting parameter prediction model;
fit a preset 3D reference face mesh according to the fitting parameters to obtain 3D parameterized face data of the corresponding user as first parameterized data, wherein the 3D reference face mesh comprises a plurality of interconnected vertices, each vertex carrying face semantics;
if 3D parameterized face data of the user has already been registered and retained as a base record, take that retained 3D parameterized face data as second parameterized data and determine the timestamp of the second parameterized data;
and generate third parameterized data, as the retained base face data of the user's registration, according to the first parameterized data, the second parameterized data, and the timestamp of the second parameterized data.
Based on the same considerations, one or more embodiments of the present description provide a non-transitory computer storage medium storing computer-executable instructions corresponding to the above-described method, the computer-executable instructions being configured to:
acquiring a 3D face image of a user face, and determining a 3D face point cloud of the user face according to the 3D face image;
according to the 3D face point cloud, predicting fitting parameters required by 3D face reconstruction through a pre-trained fitting parameter prediction model;
fit a preset 3D reference face mesh according to the fitting parameters to obtain 3D parameterized face data of the corresponding user as first parameterized data, wherein the 3D reference face mesh comprises a plurality of interconnected vertices, each vertex carrying face semantics;
if 3D parameterized face data of the user has already been registered and retained as a base record, take that retained 3D parameterized face data as second parameterized data and determine the timestamp of the second parameterized data;
and generate third parameterized data, as the retained base face data of the user's registration, according to the first parameterized data, the second parameterized data, and the timestamp of the second parameterized data.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the descriptions of the apparatus, device, and non-volatile computer storage medium embodiments are relatively brief, since they are substantially similar to the method embodiments; for the relevant parts, refer to the corresponding description of the method embodiments.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing is merely one or more embodiments of the present description and is not intended to limit the present description. Various modifications and alterations to one or more embodiments of this description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of one or more embodiments of the present description is intended to be included within the scope of the claims of the present description.
Claims (21)
1. A face registration bottom-reserving method, comprising:
acquiring a 3D face image of a user face, and determining a 3D face point cloud of the user face according to the 3D face image;
according to the 3D face point cloud, predicting fitting parameters required by 3D face reconstruction through a pre-trained fitting parameter prediction model;
fitting a preset 3D reference face mesh according to the fitting parameters to obtain 3D parameterized face data of the corresponding user as first parameterized data, wherein the 3D reference face mesh comprises a plurality of interconnected vertices, each vertex carrying face semantics;
if 3D parameterized face data of the user has already been registered and retained as a base record, taking that retained 3D parameterized face data as second parameterized data, and determining a timestamp of the second parameterized data;
generating third parameterized data as the retained base face data of the user's registration according to the first parameterized data, the second parameterized data, and the timestamp of the second parameterized data, which specifically comprises: determining the weight of the first parameterized data and the weight of the second parameterized data according to the timestamp of the second parameterized data, and performing weighted fusion of the first parameterized data and the second parameterized data according to these weights to generate the third parameterized data.
2. The method according to claim 1, wherein determining the weight of the first parameterized data and the weight of the second parameterized data according to the timestamp of the second parameterized data specifically comprises:
determining a timestamp of the first parameterized data;
determining the time difference between the timestamp of the first parameterized data and the timestamp of the second parameterized data;
and determining the weight of the second parameterized data, and from it the weight of the first parameterized data, according to a set negative correlation between the time difference and the weight of the second parameterized data.
3. The method according to claim 2, wherein determining the weights according to the set negative correlation between the time difference and the weight of the second parameterized data specifically comprises:
performing time scale contraction processing on the time difference to obtain a denominator term in a set positive correlation with the time difference;
determining, from the denominator term, the weight of the second parameterized data, which is in a set negative correlation with the time difference;
and determining the weight of the first parameterized data from the weight of the second parameterized data.
4. The method of claim 1, wherein, before taking the retained 3D parameterized face data of the user as second parameterized data, the method further comprises:
judging, according to the identity of the user, whether 3D parameterized face data of the user has already been registered and retained as a base record;
if not, registering and retaining the first parameterized data as the base record according to the identity of the user.
5. The method of claim 1, wherein the number of vertices in the 3D reference face mesh is fixed, and the connection relationship among the vertices is a fixed first connection topology;
and wherein fitting the preset 3D reference face mesh according to the fitting parameters to obtain the 3D parameterized face data of the corresponding user specifically comprises:
adjusting, according to the basis vector weights in the fitting parameters and while maintaining the first connection topology, the position coordinates of at least some vertices in the 3D reference face mesh to obtain the 3D parameterized face data of the corresponding user.
6. The method according to claim 1, wherein predicting, according to the 3D face point cloud, the fitting parameters required for 3D face reconstruction through the pre-trained fitting parameter prediction model specifically comprises:
performing keypoint detection on the 3D face point cloud through a pre-trained face keypoint detection model to determine the keypoint information of the user's face;
and predicting, according to the 3D face point cloud and the keypoint information, the fitting parameters required for 3D face reconstruction through the pre-trained fitting parameter prediction model.
7. The method of claim 6, wherein, before generating the third parameterized data according to the first parameterized data, the second parameterized data, and the timestamp of the second parameterized data, the method further comprises:
determining the keypoint information corresponding to the first parameterized data and to the second parameterized data;
generating, from the respective keypoint information, a corresponding second connection topology for each;
determining the similarity between the first parameterized data and the second parameterized data through the second connection topologies;
and, if the similarity exceeds a preset threshold, determining that 3D face recognition of the user has succeeded.
8. The method according to claim 1, wherein determining the weight of the first parameterized data and the weight of the second parameterized data according to the timestamp of the second parameterized data specifically comprises:
determining a timestamp of the first parameterized data;
determining the time difference between the timestamp of the first parameterized data and the timestamp of the second parameterized data;
determining, in the second parameterized data, one or more designated face regions and the periodic change time corresponding to the designated face regions;
matching the time difference against the periodic change time to judge whether the within-period positions corresponding to the first parameterized data and the second parameterized data meet a set consistency condition;
if not, setting the weight of the second parameterized data lower than the weight of the first parameterized data;
and if so, adjusting the weight of the second parameterized data relatively upward.
9. The method of claim 8, wherein, after setting the weight of the second parameterized data lower than the weight of the first parameterized data, the method further comprises:
determining the face change level corresponding to the face regions other than the designated face regions in the second parameterized data;
compensating the weight of the second parameterized data according to the face change level;
and, if the compensation degree exceeds a preset threshold, adjusting the weight of the second parameterized data relatively upward, the face change level being inversely related to the compensation degree.
10. The method of claim 8 or 9, wherein the designated face region comprises at least one of the following: a hair covering region and a foreign object covering region.
11. A face registration bottom-reserving apparatus, comprising:
an acquisition module that acquires a 3D face image of a user's face and determines a 3D face point cloud of the user's face from the 3D face image;
a fitting parameter prediction module that predicts, according to the 3D face point cloud, fitting parameters required for 3D face reconstruction through a pre-trained fitting parameter prediction model;
a parameterization module that fits a preset 3D reference face mesh according to the fitting parameters to obtain 3D parameterized face data of the corresponding user as first parameterized data, wherein the 3D reference face mesh comprises a plurality of vertices that have connection relationships with one another, and face semantics are respectively set on the vertices;
a determining module that, if 3D parameterized face data of the user has already been registered and retained as a base record, takes that retained 3D parameterized face data as second parameterized data and determines a timestamp of the second parameterized data;
and a generation module that generates third parameterized data as the retained base face data of the user's registration according to the first parameterized data, the second parameterized data, and the timestamp of the second parameterized data, which specifically comprises: determining the weight of the first parameterized data and the weight of the second parameterized data according to the timestamp of the second parameterized data, and performing weighted fusion of the first parameterized data and the second parameterized data according to these weights to generate the third parameterized data.
12. The apparatus of claim 11, wherein the generation module determines a timestamp of the first parameterized data;
determines the time difference between the timestamp of the first parameterized data and the timestamp of the second parameterized data;
and determines the weight of the second parameterized data, and from it the weight of the first parameterized data, according to a set negative correlation between the time difference and the weight of the second parameterized data.
13. The apparatus of claim 12, wherein the generation module performs time scale contraction processing on the time difference to obtain a denominator term in a set positive correlation with the time difference;
determines, from the denominator term, the weight of the second parameterized data, which is in a set negative correlation with the time difference;
and determines the weight of the first parameterized data from the weight of the second parameterized data.
14. The apparatus of claim 11, further comprising:
a judging module for judging, according to the identity of the user, whether 3D parameterized face data of the user has already been registered and retained as a base record;
if not, registering and retaining the first parameterized data as the base record according to the identity of the user.
15. The apparatus of claim 11, wherein the number of vertices in the 3D reference face mesh is fixed, and the connection relationship among the vertices is a fixed first connection topology;
and the parameterization module adjusts, according to the basis vector weights in the fitting parameters and while maintaining the first connection topology, the position coordinates of at least some vertices in the 3D reference face mesh to obtain the 3D parameterized face data.
16. The apparatus of claim 11, wherein the fitting parameter prediction module performs keypoint detection on the 3D face point cloud through a pre-trained face keypoint detection model to determine the keypoint information of the user's face;
and predicts, according to the 3D face point cloud and the keypoint information, the fitting parameters required for 3D face reconstruction through the pre-trained fitting parameter prediction model.
17. The apparatus of claim 16, further comprising:
an identification module for determining the keypoint information corresponding to the first parameterized data and to the second parameterized data;
generating, from the respective keypoint information, a corresponding second connection topology for each;
determining the similarity between the first parameterized data and the second parameterized data through the second connection topologies;
and, if the similarity exceeds a preset threshold, determining that 3D face recognition of the user has succeeded.
18. The apparatus of claim 11, wherein the generation module determines a timestamp of the first parameterized data;
determines the time difference between the timestamp of the first parameterized data and the timestamp of the second parameterized data;
determines, in the second parameterized data, one or more designated face regions and the periodic change time corresponding to the designated face regions;
matches the time difference against the periodic change time to judge whether the within-period positions corresponding to the first parameterized data and the second parameterized data meet a set consistency condition;
if not, sets the weight of the second parameterized data lower than the weight of the first parameterized data;
and if so, adjusts the weight of the second parameterized data relatively upward.
19. The apparatus of claim 18, wherein the generation module determines the face change level corresponding to the face regions other than the designated face regions in the second parameterized data;
compensates the weight of the second parameterized data according to the face change level;
and, if the compensation degree exceeds a preset threshold, adjusts the weight of the second parameterized data relatively upward, the face change level being inversely related to the compensation degree.
20. The apparatus of claim 18 or 19, wherein the designated face region comprises at least one of the following: a hair covering region and a foreign object covering region.
21. A face registration bottom-reserving device, comprising:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring a 3D face image of a user face, and determining a 3D face point cloud of the user face according to the 3D face image;
according to the 3D face point cloud, predicting fitting parameters required by 3D face reconstruction through a pre-trained fitting parameter prediction model;
fit a preset 3D reference face mesh according to the fitting parameters to obtain 3D parameterized face data of the corresponding user as first parameterized data, wherein the 3D reference face mesh comprises a plurality of interconnected vertices, each vertex carrying face semantics;
if 3D parameterized face data of the user has already been registered and retained as a base record, take that retained 3D parameterized face data as second parameterized data and determine a timestamp of the second parameterized data;
and generate third parameterized data as the retained base face data of the user's registration according to the first parameterized data, the second parameterized data, and the timestamp of the second parameterized data, which specifically comprises: determining the weight of the first parameterized data and the weight of the second parameterized data according to the timestamp of the second parameterized data, and performing weighted fusion of the first parameterized data and the second parameterized data according to these weights to generate the third parameterized data.