CN113673374B - Face recognition method, device and equipment - Google Patents
- Publication number
- CN113673374B (application CN202110886912.3A / CN202110886912A)
- Authority
- CN
- China
- Prior art keywords
- face recognition
- user
- image
- dimensional
- dimensional image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F16/27 — Information retrieval; replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
- G06F18/22 — Pattern recognition; matching criteria, e.g. proximity measures
- G06F21/6245 — Security arrangements; protecting personal data, e.g. for financial or medical purposes
- G06N3/04 — Neural networks; architecture, e.g. interconnection topology
- G06N3/084 — Neural network learning methods; backpropagation, e.g. using gradient descent
Abstract
The embodiments of this specification disclose a face recognition method, apparatus, and device. The method comprises: when a face recognition request of a target user is acquired, acquiring a pre-stored face recognition reference image, wherein the request contains a captured two-dimensional image of the target user's face and the reference image is a three-dimensional image; extracting features from the two-dimensional image to obtain its image features, and extracting features from the reference image to obtain its image features; and performing face recognition on the target user based on the two sets of image features to obtain the recognition result corresponding to the request.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, and a device for face recognition.
Background
With the continued development of facial recognition technology, protecting the privacy of users' face data has become a growing public concern. Regulations on face privacy protection are being issued in succession to standardize how developers of face recognition technology collect and retain users' face data.

A face recognition system based on two-dimensional face images must retain a large gallery of users' two-dimensional face images as its reference base. Because these images contain a great deal of private user information, protecting two-dimensional face data has always been a serious challenge for two-dimensional face recognition. Three-dimensional face data, by contrast, is almost impossible to interpret visually, so a system whose reference base consists of three-dimensional face data has a clear privacy advantage over two-dimensional recognition. However, a three-dimensional recognition system requires the image acquisition device to include a depth camera, which is relatively expensive, and it cannot be adapted to the large installed base of acquisition devices that have only a single two-dimensional camera; the scalability of such a face recognition mechanism is therefore limited. A more scalable face recognition mechanism is needed.
Disclosure of Invention
The embodiments of this specification aim to provide a more scalable face recognition mechanism.

To that end, the embodiments of this specification are implemented as follows:
The embodiments of this specification provide a face recognition method, comprising: when a face recognition request of a target user is acquired, acquiring a pre-stored face recognition reference image, wherein the request contains a captured two-dimensional image of the target user's face and the reference image is a three-dimensional image; extracting features from the two-dimensional image to obtain its image features, and extracting features from the reference image to obtain its image features; and performing face recognition on the target user based on the two sets of image features to obtain the recognition result corresponding to the request.

The embodiments of this specification further provide a face recognition method applied to a blockchain system, comprising: receiving a face recognition request of a target user from a terminal device, the request containing a captured two-dimensional image of the target user's face; acquiring a pre-stored face recognition reference image from the blockchain system based on a pre-deployed first smart contract, wherein the reference image is a three-dimensional image and the first smart contract is used to trigger face recognition processing for a user who initiates such a request; extracting features, based on the first smart contract, from the two-dimensional image and from the reference image to obtain their respective image features; and performing face recognition on the target user through the first smart contract based on those image features to obtain the recognition result corresponding to the request.
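The blockchain-based flow described above can be sketched as follows. This is a minimal illustration under stated assumptions, not a real chain or contract API: `ChainStore`, `FaceRecognitionContract`, and the injected extractor and matcher callables are hypothetical names, and on-chain storage is stood in for by an in-memory dictionary.

```python
class ChainStore:
    """Stand-in for on-chain storage of 3D face reference images."""
    def __init__(self):
        self._refs = {}  # user_id -> 3D reference image (e.g. a depth map)

    def put(self, user_id, ref_3d):
        self._refs[user_id] = ref_3d

    def get_all(self):
        return dict(self._refs)


class FaceRecognitionContract:
    """Stand-in for the 'first smart contract' that triggers recognition."""
    def __init__(self, store, extract_2d, extract_3d, match):
        self.store = store
        self.extract_2d = extract_2d  # feature extractor for 2D images
        self.extract_3d = extract_3d  # feature extractor for 3D images
        self.match = match            # cross-modal similarity function

    def recognize(self, request):
        # The request carries the captured 2D face image of the target user.
        probe = self.extract_2d(request["image_2d"])
        best_user, best_score = None, float("-inf")
        for user_id, ref in self.store.get_all().items():
            score = self.match(probe, self.extract_3d(ref))
            if score > best_score:
                best_user, best_score = user_id, score
        return {"user_id": best_user, "score": best_score}
```

In a real deployment the store and the recognition logic would run inside the blockchain system's contract runtime; the plain Python classes here only mirror the data flow of the method.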
The embodiments of this specification provide a face recognition apparatus, comprising: an image acquisition module that, when a face recognition request of a target user is acquired, acquires a pre-stored face recognition reference image, wherein the request contains a captured two-dimensional image of the target user's face and the reference image is a three-dimensional image; a first feature extraction module that extracts features from the two-dimensional image to obtain its image features and extracts features from the reference image to obtain its image features; and a face recognition module that performs face recognition on the target user based on the two sets of image features to obtain the recognition result corresponding to the request.

The embodiments of this specification provide another face recognition apparatus, comprising: a request module that receives a face recognition request of a target user from a terminal device, the request containing a captured two-dimensional image of the target user's face; an image acquisition module that acquires a pre-stored face recognition reference image from the apparatus based on a pre-deployed first smart contract, wherein the reference image is a three-dimensional image and the first smart contract is used to trigger face recognition processing for a user who initiates such a request; a feature extraction module that extracts features, based on the first smart contract, from the two-dimensional image and from the reference image to obtain their respective image features; and a face recognition module that performs face recognition on the target user through the first smart contract based on those image features to obtain the recognition result corresponding to the request.

The embodiments of this specification provide a face recognition device, comprising a processor and a memory arranged to store computer-executable instructions that, when executed, cause the processor to: when a face recognition request of a target user is acquired, acquire a pre-stored face recognition reference image, wherein the request contains a captured two-dimensional image of the target user's face and the reference image is a three-dimensional image; extract features from the two-dimensional image and from the reference image to obtain their respective image features; and perform face recognition on the target user based on those image features to obtain the recognition result corresponding to the request.

The embodiments of this specification provide a face recognition device that is a device in a blockchain system, comprising a processor and a memory arranged to store computer-executable instructions that, when executed, cause the processor to: receive a face recognition request of a target user from a terminal device, the request containing a captured two-dimensional image of the target user's face; acquire a pre-stored face recognition reference image from the blockchain system based on a pre-deployed first smart contract, wherein the reference image is a three-dimensional image and the first smart contract is used to trigger face recognition processing for a user who initiates such a request; extract features, based on the first smart contract, from the two-dimensional image and from the reference image to obtain their respective image features; and perform face recognition on the target user through the first smart contract based on those image features to obtain the recognition result corresponding to the request.

The embodiments of this specification also provide a storage medium storing computer-executable instructions that, when executed, implement the steps of the first method above.

The embodiments of this specification also provide a storage medium storing computer-executable instructions that, when executed, implement the steps of the blockchain-based method above.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings in the following description are only some of the embodiments described in this specification; a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1A is a diagram illustrating an exemplary embodiment of a face recognition method according to the present disclosure;
FIG. 1B is a schematic diagram of a face recognition process according to the present disclosure;
FIG. 2 is a schematic diagram of a face recognition system according to the present disclosure;
FIG. 3 is a schematic diagram of another face recognition process according to the present disclosure;
FIG. 4 is a schematic diagram of a model training process of the present disclosure;
FIG. 5A is a diagram of another face recognition method embodiment of the present disclosure;
FIG. 5B is a schematic diagram illustrating a face recognition process according to another embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a face recognition process according to another embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a face recognition apparatus embodiment of the present disclosure;
FIG. 8 is a schematic diagram of another face recognition apparatus embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a face recognition device embodiment of the present specification.
Detailed Description
The embodiment of the specification provides a face recognition method, a face recognition device and face recognition equipment.
To help those skilled in the art better understand the technical solutions in this specification, the solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without inventive effort fall within the scope of protection of this specification.
Example 1
As shown in fig. 1A and fig. 1B, an embodiment of the present disclosure provides a face recognition method. The method may be executed by a terminal device or a server; the terminal device may be a mobile phone, a tablet computer, a personal computer, or the like. The server may be the server of a particular service (such as a transaction or financial service), for example a payment service or a finance- or instant-messaging-related service, or it may be a server dedicated to performing face recognition for a service. The embodiments below take the server as the execution body; where the execution body is a terminal device, the related content below applies analogously and is not repeated here. The method specifically comprises the following steps:
In step S102, in the case where a face recognition request of the target user is acquired, a face recognition reference image stored in advance is acquired, the face recognition request including a captured two-dimensional image including the face of the target user, the face recognition reference image being a three-dimensional image.
The target user may be any user for whom face recognition is required. The face recognition reference image may be an accurate face image that the user provided when registering with the face recognition mechanism; it may comprise one face image or several different face images, and it is a three-dimensional image, for example a depth image. The two-dimensional image may be a two-dimensional RGB image, i.e. an image that represents many different colors by varying and combining the three color channels red (R), green (G), and blue (B).
In practice, with the development of face recognition technology, protecting the privacy of users' faces has become a growing public concern, and regulations on face privacy protection are being issued in succession to standardize how developers collect and retain users' face data. A face recognition system based on two-dimensional face images must retain a large reference base of users' two-dimensional face images, which contain a great deal of private information; protecting this data has always been a serious challenge, and a leak of the reference base, or a regulatory ban on storing two-dimensional face data, would deal a serious blow to a two-dimensional recognition mechanism. Three-dimensional face data, by contrast, is almost impossible to interpret visually, so a three-dimensional reference base has a clear privacy advantage over two-dimensional recognition; however, a three-dimensional recognition system requires a depth camera as its acquisition device, which is relatively expensive and cannot be adapted to the large installed base of single two-dimensional camera devices, limiting the scalability of the mechanism. A more scalable face recognition mechanism is therefore needed. The embodiments of this specification provide a feasible solution, which may specifically include the following:
Based on the above, the reference base of the face recognition mechanism can hold three-dimensional images rather than two-dimensional images for face recognition processing; since a three-dimensional image is almost unrecognizable visually, this greatly improves the protection of users' private data. A depth camera assembly can be set up in advance, and different users can capture three-dimensional images of their faces with it. The captured three-dimensional images are then sent to the server, which stores each three-dimensional image together with the corresponding user identifier in the reference base. In addition, to remain compatible with the majority of face acquisition components, conventional two-dimensional acquisition components can still be used, so that the face recognition system can connect seamlessly to ordinary terminal devices and traditional two-dimensional acquisition components, ensuring high compatibility at the acquisition end.
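The enrollment step above — storing each user's depth image against their identifier — can be sketched as follows. The in-memory dictionary and the class name `ReferenceBase` are assumptions standing in for the reference base; a production system would use a database or the blockchain storage the text also describes.

```python
class ReferenceBase:
    """Holds each user's 3D (depth) reference image, keyed by user ID."""
    def __init__(self):
        self._base = {}

    def enroll(self, user_id, depth_image):
        # Only the 3D image is retained; no 2D face photo is stored,
        # which is the privacy property the text emphasizes.
        self._base[user_id] = depth_image

    def lookup(self, user_id):
        return self._base.get(user_id)

    def items(self):
        return self._base.items()
```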
As shown in fig. 2, when face recognition is required for a user (i.e., the target user), the system must ensure that the acquired image is a current, real image of the actual user rather than, say, a previously taken photograph of that user. An image acquisition component corresponding to the server — for example a two-dimensional camera installed in a designated area (a camera in an office, the camera of a supermarket checkout machine, etc.) or the camera of the terminal device used by the target user (such as a mobile phone) — captures images for face detection and specified-action detection. Specified-action detection is a way of verifying the real physiological characteristics of the subject in authentication scenarios: in face recognition applications it typically combines actions such as blinking, opening the mouth, shaking the head, and nodding with face key-point localization and face tracking to verify that a real, live user is operating. After the action detection passes, a two-dimensional image including the target user's face is captured. The image acquisition component sends this two-dimensional image to the server, which receives it and retrieves the pre-stored face recognition reference image (i.e., the three-dimensional image) from the reference base.
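The liveness gate described above can be sketched as follows: the 2D photo is only taken once a randomly prompted subset of actions (blink, open mouth, shake head, nod) has all been detected. The detector functions are assumptions injected as callables, and `ACTIONS`, `liveness_check`, and `capture_for_recognition` are illustrative names, not part of the original text.

```python
import random

ACTIONS = ["blink", "open_mouth", "shake_head", "nod"]

def liveness_check(frames, detectors, n_actions=2, rng=random):
    """Prompt a random subset of actions and require all to be observed."""
    prompted = rng.sample(ACTIONS, n_actions)
    return all(detectors[action](frames) for action in prompted)

def capture_for_recognition(frames, detectors, take_photo):
    # Only photograph the target user's face once liveness passes,
    # as the text describes; otherwise refuse to produce an image.
    if not liveness_check(frames, detectors):
        return None
    return take_photo(frames)
```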
In step S104, feature extraction is performed on the two-dimensional image to obtain an image feature corresponding to the two-dimensional image, and feature extraction is performed on the face recognition reference image to obtain an image feature corresponding to the face recognition reference image.
In implementation, a feature extraction algorithm for two-dimensional images (for convenience, the first feature extraction algorithm) and a feature extraction algorithm for three-dimensional images (the second feature extraction algorithm) may be set in advance. After the two-dimensional and three-dimensional images have been obtained as above, the first feature extraction algorithm extracts features from the captured two-dimensional image to obtain its image features, and the second feature extraction algorithm extracts features from the pre-stored three-dimensional image to obtain its image features, i.e., the image features corresponding to the face recognition reference image.
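A sketch of the two extraction algorithms: a 2D branch and a 3D branch that map both modalities into one shared embedding space so their features become directly comparable. Random linear projections stand in for the trained networks — the input and embedding dimensions and the projections themselves are assumptions, since the text does not fix the architectures.

```python
import numpy as np

rng = np.random.default_rng(0)
W_2d = rng.standard_normal((128, 64))  # 2D branch: 128-dim input -> 64-dim embedding
W_3d = rng.standard_normal((256, 64))  # 3D branch: 256-dim input -> same 64-dim space

def l2_normalize(v):
    return v / (np.linalg.norm(v) + 1e-12)

def extract_2d_features(img_vec):
    """First feature extraction algorithm (2D image -> shared embedding)."""
    return l2_normalize(img_vec @ W_2d)

def extract_3d_features(depth_vec):
    """Second feature extraction algorithm (3D image -> shared embedding)."""
    return l2_normalize(depth_vec @ W_3d)
```

Because both branches emit unit vectors in the same 64-dimensional space, a single similarity measure can compare a 2D probe against 3D references.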
In step S106, a face recognition process is performed on the target user based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image, so as to obtain a recognition result corresponding to the face recognition request.
In implementation, after the image features of the two-dimensional image and of the face recognition reference images have been obtained as above, a similarity can be computed between the features of the two-dimensional image and the features of each face recognition reference image in the reference base. The reference image with the highest similarity is then selected, the user information associated with it is retrieved, and the identity of the target user is determined from that information, thereby completing the face recognition processing of the target user and producing the recognition result corresponding to the face recognition request.
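The matching step above — comparing the probe's features with every reference image's features and keeping the best — might look like the following. Cosine similarity is one common choice, used here as an assumption; the text does not mandate a specific measure.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe_emb, gallery):
    """gallery: user_id -> reference embedding. Returns (best user, score)."""
    best_user = max(gallery, key=lambda u: cosine(probe_emb, gallery[u]))
    return best_user, cosine(probe_emb, gallery[best_user])
```

In practice a threshold on the best score would also be applied so that users who are not enrolled at all are rejected rather than matched to the nearest stranger.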
It should be noted that the similarity between the image features of the two-dimensional image and the image features of the three-dimensional image may be calculated in various ways. For example, a similarity comparison mechanism may be set up for the portions where the two-dimensional image and the three-dimensional image differ significantly or where they are similar; alternatively, a corresponding model may be trained in advance, and face recognition processing may be performed on the target user by means of the trained model together with the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image, so as to obtain the recognition result corresponding to the face recognition request. The specific manner may be set according to the actual situation.
The embodiment of the specification provides a face recognition method. When a face recognition request of a target user is acquired, a pre-stored face recognition reference image is acquired, where the face recognition request includes a captured two-dimensional image containing the face of the target user, and the face recognition reference image is a three-dimensional image. Then, feature extraction is performed on the two-dimensional image to obtain the image features corresponding to the two-dimensional image, and feature extraction is performed on the face recognition reference image to obtain the image features corresponding to the face recognition reference image. Finally, face recognition processing is performed on the target user based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image, so as to obtain the recognition result corresponding to the face recognition request. In this way, facial features of the two-dimensional face image and the three-dimensional face image can be compared against each other, so that the user identity corresponding to the two-dimensional image is located in a reserved base of three-dimensional images, realizing cross-modal face recognition. On the premise that no two-dimensional face image of the user is retained as a reserved base, verification of the user identity is realized through face recognition, which greatly improves the security of the system and the protection of user privacy, ensures high compatibility and expansibility of the acquisition-end equipment, and means that the database storing the reserved-base information of users' three-dimensional face images has characteristics such as low visual identifiability and high privacy security.
Example two
As shown in fig. 3, the embodiment of the present disclosure provides a face recognition method, where an execution subject of the method may be a terminal device or a server, where the terminal device may be a mobile phone, a tablet computer, a personal computer, or the like. The server may be a server of a certain service (such as a service for conducting a transaction or a financial service, etc.), specifically, the server may be a server of a payment service, a server of a service related to finance or instant messaging, etc., or may be a server for performing face recognition for a certain service. In the embodiment of the present disclosure, the execution body is taken as the server as an example for detailed description, and for the case that the execution body is a terminal device, the following related content may be referred to, and will not be described herein. The method specifically comprises the following steps:
in step S302, a model architecture of a cross-modality face recognition model is constructed based on a preset machine learning algorithm.
The machine learning algorithm may include a plurality of deep learning algorithms, for example, a neural network algorithm, a multi-layer perceptron, etc., and the corresponding model may be a neural network model, a multi-layer perceptron, etc., which may be set according to the actual situation; the embodiment of the present disclosure is not limited thereto. The model architecture of the cross-modal face recognition model may contain one or more different parameters whose values are to be determined; once the values of these parameters are determined, the cross-modal face recognition model becomes a complete model.
In implementation, an algorithm of the cross-modal face recognition model to be built can be preset according to actual conditions, and in this embodiment, a machine learning algorithm can be selected as an algorithm for building the cross-modal face recognition model, that is, a model framework of the cross-modal face recognition model can be built through the machine learning algorithm.
In step S304, first triplet data based on two-dimensional images for a first user and second triplet data based on three-dimensional images for the first user are acquired, the first triplet data comprising: a first two-dimensional image of the first user, a second two-dimensional image of a user having the same user identification as the first user, and a third two-dimensional image of a user having a different user identification from the first user; the second triplet data comprising: a first three-dimensional image of the first user, a second three-dimensional image of a user having the same user identification as the first user, and a third three-dimensional image of a user having a different user identification from the first user.
In implementation, in terms of the composition of the training data, a triplet form may be adopted. For each triplet, a pair of two-dimensional + three-dimensional face images of a user may be randomly selected as the anchor A; then, a pair of two-dimensional + three-dimensional face images of a user with the same ID (namely, user identification) may be selected as the positive sample P; and a pair of two-dimensional + three-dimensional face images of a user with a different ID may be selected as the negative sample N. The three together form the corresponding triplets, namely the triplet data (A, P, N) of the two-dimensional images and the triplet data (A, P, N) of the three-dimensional images.
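The anchor/positive/negative sampling described above can be sketched as follows. This is a hedged illustration, assuming a hypothetical `gallery` mapping from user id to (2D image, 3D image) pairs; the function name and data layout are assumptions, not the patent's own interface.

```python
import random

def build_triplets(gallery, num):
    """gallery maps user_id -> list of (two_d_image, three_d_image) pairs.
    For each triplet, pick an anchor pair A, a positive pair P from the same
    user id, and a negative pair N from a different user id, yielding one
    triplet of 2D images and one parallel triplet of 3D images."""
    user_ids = list(gallery)
    triplets_2d, triplets_3d = [], []
    for _ in range(num):
        anchor_id = random.choice(user_ids)
        neg_id = random.choice([u for u in user_ids if u != anchor_id])
        a = random.choice(gallery[anchor_id])   # anchor pair (2D, 3D)
        p = random.choice(gallery[anchor_id])   # positive: same user id
        n = random.choice(gallery[neg_id])      # negative: different user id
        triplets_2d.append((a[0], p[0], n[0]))
        triplets_3d.append((a[1], p[1], n[1]))
    return triplets_2d, triplets_3d
```

The two returned lists are parallel: the i-th 2D triplet and the i-th 3D triplet describe the same anchor/positive/negative identities, which is what the cross-modal supervision in step S308 relies on.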
In step S306, feature extraction is performed on the first triplet data and the second triplet data, so as to obtain an image feature set corresponding to the first triplet data and an image feature set corresponding to the second triplet data.
In implementation, a feature extraction algorithm may be preset; feature extraction may be performed on A, P and N in the triplet data (A, P, N) of the two-dimensional images by the feature extraction algorithm to obtain the image feature set corresponding to the first triplet data, and feature extraction may be performed on A, P and N in the triplet data (A, P, N) of the three-dimensional images to obtain the image feature set corresponding to the second triplet data.
The specific processing manner of step S306 may be varied, and an alternative processing manner is provided below, and specific reference may be made to the following processing of step A2 and step A4.
In step A2, feature extraction is performed on the first triplet data through a first feature extraction model, so as to obtain an image feature set corresponding to the first triplet data.
In implementation, the model architecture of the first feature extraction model may be constructed based on a machine learning algorithm, where the machine learning algorithm may include a plurality of types, for example, a perceptron, a neural network algorithm, and the like, and may be specifically set according to actual situations, which is not limited in the embodiments of the present disclosure. The algorithm of the first feature extraction model to be built can be preset according to the actual situation, and in this embodiment, a machine learning algorithm can be selected as the algorithm for building the first feature extraction model, that is, the model architecture of the first feature extraction model can be built through the machine learning algorithm. Then, the triplet data of a plurality of different two-dimensional images can be constructed based on the construction mode of the first triplet data, the triplet data of the plurality of different two-dimensional images can be used as training data, and the training data can be used for carrying out model training on the first feature extraction model to obtain a trained first feature extraction model. After the first triplet data is obtained in the above manner, the first triplet data can be input into the first feature extraction model, and an image feature set corresponding to the first triplet data is obtained.
In step A4, feature extraction is performed on the second triplet data through a second feature extraction model, and an image feature set corresponding to the second triplet data is obtained.
In implementation, the model architecture of the second feature extraction model may be constructed based on a machine learning algorithm, where the machine learning algorithm may include a plurality of types, for example, a perceptron, a neural network algorithm, and the like, and may be specifically set according to actual situations, which is not limited in the embodiments of the present disclosure. The algorithm of the second feature extraction model to be built can be preset according to the actual situation, and in this embodiment, a machine learning algorithm can be selected as the algorithm for building the second feature extraction model, that is, the model architecture of the second feature extraction model can be built through the machine learning algorithm. Then, the triple data of a plurality of different three-dimensional images can be constructed based on the construction mode of the second triple data, the triple data of the plurality of different three-dimensional images can be used as training data, and the training data can be used for carrying out model training on the second feature extraction model to obtain a trained second feature extraction model. After the second triplet data is obtained in the above manner, the second triplet data can be input into the second feature extraction model, and an image feature set corresponding to the second triplet data is obtained.
In step S308, based on the image feature set corresponding to the first triplet data and the image feature set corresponding to the second triplet data, and taking as the training objective pulling closer the similarity between the image features corresponding to the two-dimensional image and the image features corresponding to the three-dimensional image of the same user identification, while pushing apart the similarity between the image features corresponding to the two-dimensional image and the image features corresponding to the three-dimensional image of different user identifications, the cross-modal face recognition model is subjected to supervised training to obtain the trained cross-modal face recognition model.
In implementation, as shown in fig. 4, after the image feature set corresponding to the first triplet data and the image feature set corresponding to the second triplet data are obtained in the above manner, supervision can be built on the relationship between the triplet formed by the two-dimensional images A, P, N and the triplet formed by the three-dimensional images A, P, N: the similarity between the two-dimensional images A, P and the three-dimensional images A, P should be higher (or greater than a first predetermined threshold value, etc.), the similarity between the two-dimensional images A, P and the three-dimensional image N should be lower (or less than a second predetermined threshold value, etc.), and correspondingly, the similarity between the two-dimensional image N and the three-dimensional images A, P should be lower (or less than the second predetermined threshold value, etc.). On this basis, the cross-modal face recognition model can be subjected to supervised training to obtain the trained cross-modal face recognition model.
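The supervision objective above — higher cross-modal similarity for the same identity, lower for different identities — is commonly expressed as a triplet-margin loss. The following is a minimal sketch under that assumption; the function name, margin value, and use of a dot product on L2-normalised features are all illustrative choices, not the patent's stated formula.

```python
import numpy as np

def cross_modal_triplet_loss(anchor_2d, pos_3d, neg_3d, margin=0.2):
    """anchor_2d: feature of the 2D anchor image; pos_3d / neg_3d: features of
    the 3D images with the same / a different user id (all L2-normalised)."""
    pos = float(np.dot(anchor_2d, pos_3d))  # same identity: should be large
    neg = float(np.dot(anchor_2d, neg_3d))  # different identity: should be small
    # hinge form: the loss is zero once pos exceeds neg by at least the margin,
    # otherwise training pressure pulls pos up and pushes neg down
    return max(0.0, margin + neg - pos)
```

Minimising this quantity over many (A, P, N) triplets drives the 2D and 3D embeddings of the same user identification together while separating those of different user identifications, which is exactly the training goal of step S308.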
In step S310, the first feature extraction model and the second feature extraction model are respectively supervised and trained by a back propagation method, so as to obtain a trained first feature extraction model and a trained second feature extraction model.
The supervised training may be a process of training the model with sample data carrying labels. Back propagation may be realized by the back propagation algorithm, which mainly iterates two repeated phases (excitation propagation and weight update) until the response to the input reaches a predetermined target range; the specifics may be set according to the actual situation, and the embodiment of the present disclosure is not limited thereto.
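The two repeated phases named above can be illustrated with a deliberately tiny one-weight model; this is a pedagogical sketch of the iterate-until-target loop, not the actual feature extraction networks, and `train_step` is a hypothetical helper.

```python
def train_step(w, x, y, lr=0.1):
    pred = w * x                 # excitation propagation (forward pass)
    grad = 2 * (pred - y) * x    # gradient of the squared error w.r.t. w
    return w - lr * grad         # weight update

# repeat the two phases until the response approaches the target y = 2.0
w = 0.0
for _ in range(100):
    w = train_step(w, x=1.0, y=2.0)
```

After enough iterations the weight converges so that the model's response to the input reaches the target, mirroring the stopping condition described in the text.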
In step S312, in the case where the face recognition request of the target user is acquired, a face recognition reference image stored in advance is acquired, the face recognition request including a captured two-dimensional image including the face of the target user, the face recognition reference image being a three-dimensional image.
In order to improve the processing efficiency of the data, the captured two-dimensional image may be preprocessed so that only the required image region is retained; see specifically the processing of step S314 and step S316 below.
In step S314, the area where the face of the target user is located in the above two-dimensional image is detected.
In implementation, the detection of the facial area in the image may include a plurality of different manners, for example, a corresponding model (such as a convolutional neural network model, etc.) or an algorithm (such as a classification algorithm, etc.) or a plurality of different algorithm combinations may be trained in advance, and the detection of the facial area in the image may be performed by using the model or the algorithm, etc., so as to obtain the area where the face of the target user is located in the two-dimensional image.
In step S316, the two-dimensional image is cropped based on the detected region of the face of the target user in the two-dimensional image, and a cropped two-dimensional image is obtained.
In implementation, the detected region in which the target user's face is located can be retained, the other regions of the two-dimensional image can be cropped away, and the cropped two-dimensional image is finally obtained.
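The crop in steps S314–S316 amounts to slicing the image to the detector's bounding box. A minimal sketch, assuming the detector returns a `(top, left, bottom, right)` box (the helper name and box convention are assumptions):

```python
def crop_face(image, box):
    """image: a 2D grid of pixel rows; box: (top, left, bottom, right) from a
    face detector, with exclusive bottom/right bounds. Keeps only the detected
    face region and discards everything else."""
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]
```

The cropped result is what gets fed to the first feature extraction model in step S318, so the extractor never sees background pixels outside the face region.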
In step S318, the cropped two-dimensional image is input into a pre-trained first feature extraction model, so as to perform feature extraction on the two-dimensional image and obtain the image features corresponding to the two-dimensional image, where the first feature extraction model is obtained by performing model training based on a plurality of different historical two-dimensional images.
In implementation, the algorithm of the first feature extraction model to be built may be preset according to the actual situation. In this embodiment, a machine learning algorithm may be selected as the algorithm for building the first feature extraction model, that is, the model architecture of the first feature extraction model may be built by the machine learning algorithm, where the model architecture may include one or more different parameters whose values are to be determined. The machine learning algorithm may be of various kinds, such as a deep learning algorithm, specifically a neural network algorithm, a multi-layer perceptron, etc., and the corresponding model may be a neural network model, a multi-layer perceptron, etc., which may be set according to the actual situation; the embodiment of the present disclosure is not limited thereto. Then, a plurality of different historical two-dimensional images can be obtained in a plurality of different ways, and model training is performed on the first feature extraction model with these historical two-dimensional images, so that the trained first feature extraction model is finally obtained.
After the cropped two-dimensional image is obtained in the above manner, it can be input into the trained first feature extraction model, and feature extraction is performed on the cropped two-dimensional image by the first feature extraction model, so as to obtain the image features corresponding to the two-dimensional image.
In step S320, the face recognition reference image is input into a pre-trained second feature extraction model, so as to perform feature extraction on the face recognition reference image, and obtain image features corresponding to the face recognition reference image, where the second feature extraction model is obtained by performing model training based on a plurality of different historical three-dimensional images.
In implementation, the algorithm of the second feature extraction model to be built may be preset according to the actual situation. In this embodiment, a machine learning algorithm may be selected as the algorithm for building the second feature extraction model, that is, the model architecture of the second feature extraction model may be built by the machine learning algorithm, where the model architecture may include one or more different parameters whose values are to be determined. The machine learning algorithm may be of various kinds, such as a deep learning algorithm, specifically a neural network algorithm, a multi-layer perceptron, etc., and the corresponding model may be a neural network model, a multi-layer perceptron, etc., which may be set according to the actual situation; the embodiment of the present disclosure is not limited thereto. Then, a plurality of different historical three-dimensional images can be obtained in a plurality of different ways, and model training is performed on the second feature extraction model with these historical three-dimensional images, so that the trained second feature extraction model is finally obtained.
After the face recognition reference image is obtained in the above manner, the face recognition reference image can be input into the trained second feature extraction model, and feature extraction is performed on the face recognition reference image through the second feature extraction model, so that image features corresponding to the face recognition reference image are obtained.
In step S322, the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image are input into a pre-trained cross-mode face recognition model, so as to perform face recognition processing on the target user, and a recognition result corresponding to the face recognition request is obtained.
In implementation, the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image may be input into the pre-trained cross-modal face recognition model, so that the image features corresponding to the two-dimensional image are respectively compared with the image features corresponding to the face recognition reference images by the cross-modal face recognition model; from the comparison result, the identity of the target user can finally be determined, that is, the recognition result corresponding to the face recognition request is obtained. Thus, through the above training process and supervision mechanism, the features of the two-dimensional face image and the three-dimensional face image can be compared against each other, so that the user identity corresponding to the uploaded two-dimensional image is located from the reserved base of three-dimensional face images, realizing cross-modal face recognition.
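The end-to-end inference flow of steps S318–S322 can be summarised in one sketch. Everything here is a hypothetical scaffold: `extract_2d`, `extract_3d`, and `score` stand in for the trained first/second feature extraction models and the cross-modal face recognition model, which the patent does not expose as concrete APIs.

```python
import numpy as np

def recognize(two_d_image, reference_images, extract_2d, extract_3d, score):
    """Extract a feature from the uploaded 2D image, extract a feature from
    each reserved 3D reference image, score every cross-modal pair with the
    trained model, and return the user id of the best-scoring reference."""
    probe = extract_2d(two_d_image)
    best_user, best_score = None, -np.inf
    for user_id, ref in reference_images.items():
        s = score(probe, extract_3d(ref))   # cross-modal comparison
        if s > best_score:
            best_user, best_score = user_id, s
    return best_user, best_score
```

In a real deployment the 3D reference features would typically be extracted once and cached rather than recomputed per request; they are recomputed here only to keep the sketch short.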
The embodiment of the specification provides a face recognition method. When a face recognition request of a target user is acquired, a pre-stored face recognition reference image is acquired, where the face recognition request includes a captured two-dimensional image containing the face of the target user, and the face recognition reference image is a three-dimensional image. Then, feature extraction is performed on the two-dimensional image to obtain the image features corresponding to the two-dimensional image, and feature extraction is performed on the face recognition reference image to obtain the image features corresponding to the face recognition reference image. Finally, face recognition processing is performed on the target user based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image, so as to obtain the recognition result corresponding to the face recognition request. In this way, facial features of the two-dimensional face image and the three-dimensional face image can be compared against each other, so that the user identity corresponding to the two-dimensional image is located in a reserved base of three-dimensional images, realizing cross-modal face recognition. On the premise that no two-dimensional face image of the user is retained as a reserved base, verification of the user identity is realized through face recognition, which greatly improves the security of the system and the protection of user privacy, ensures high compatibility and expansibility of the acquisition-end equipment, and means that the database storing the reserved-base information of users' three-dimensional face images has characteristics such as low visual identifiability and high privacy security.
In addition, through the training process and supervision mechanism, the facial features of the two-dimensional face image and the three-dimensional face image can be compared against each other, so that the user identity corresponding to the two-dimensional image is located from a reserved base of three-dimensional images, realizing cross-modal face recognition.
Example III
As shown in fig. 5A and fig. 5B, the embodiment of the present disclosure provides a face recognition method, where an execution subject of the method may be a blockchain system, and the blockchain system may be composed of a terminal device or a server, where the terminal device may be a mobile terminal device such as a mobile phone, a tablet computer, or a device such as a personal computer. The server may be a single server, or may be a server cluster formed by a plurality of servers. The method specifically comprises the following steps:
in step S502, a face recognition request of a target user transmitted by a terminal device is received, the face recognition request including a captured two-dimensional image including a face of the target user.
The terminal device may be a terminal device used by a target user, and the terminal device may be a device such as a mobile phone, a tablet computer, a personal computer, or a machine for performing face recognition.
In step S504, a pre-stored face recognition reference image is acquired from the blockchain system based on a pre-deployed first smart contract, the face recognition reference image being a three-dimensional image, the first smart contract being used to trigger a face recognition process for a user who initiates a face recognition request.
The first intelligent contract is provided with a rule for carrying out face recognition processing on a user initiating a face recognition request, wherein the rule can comprise one rule or a plurality of rules.
In implementation, a first smart contract may be built in advance based on a face recognition process, and the built first smart contract may be deployed in the blockchain system, so that a face recognition process is triggered by the first smart contract for a user initiating a face recognition request. In order to protect privacy data such as face images of users and to prevent the privacy data of users from being tampered with, face recognition reference images (three-dimensional images) of different users may be stored in a blockchain system. After the blockchain system receives the face recognition request of the target user, the first intelligent contract can be called, and the face recognition processing of the user initiating the face recognition request is triggered through the corresponding rule set in the first intelligent contract.
In step S506, feature extraction is performed on the two-dimensional image based on the first intelligent contract to obtain image features corresponding to the two-dimensional image, and feature extraction is performed on the face recognition reference image based on the first intelligent contract to obtain image features corresponding to the face recognition reference image.
In implementation, a rule or algorithm for extracting features of the image may be set in the first intelligent contract, feature extraction may be performed on the two-dimensional image based on the rule or algorithm to obtain image features corresponding to the two-dimensional image, and correspondingly, feature extraction may be performed on the face recognition reference image based on the rule or algorithm to obtain image features corresponding to the face recognition reference image.
In step S508, the face recognition processing is performed on the target user based on the image feature corresponding to the two-dimensional image and the image feature corresponding to the face recognition reference image by the first smart contract, so as to obtain the recognition result corresponding to the face recognition request.
In implementation, a rule or algorithm for performing face recognition on the user based on the two-dimensional image and the three-dimensional image may be set in the first smart contract, and face recognition processing may be performed on the target user based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image by means of that rule or algorithm, so as to obtain the recognition result corresponding to the face recognition request.
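The first smart contract's rule pipeline can be sketched off-chain as a plain function, purely for illustration: `first_smart_contract`, the extractor callables, and the acceptance threshold are hypothetical stand-ins, and a real deployment would express these rules in the chain's own contract language rather than Python.

```python
def first_smart_contract(request, reserved_base, extract_2d, extract_3d,
                         compare, threshold=0.8):
    """Illustrative stand-in for the on-chain rule set: given a face
    recognition request carrying a 2D image, fetch the reserved 3D reference
    images, run both feature extractions, compare every cross-modal pair,
    and emit a recognition result for the best match."""
    probe = extract_2d(request["two_d_image"])
    scores = {uid: compare(probe, extract_3d(img))
              for uid, img in reserved_base.items()}
    uid = max(scores, key=scores.get)          # best-matching reserved user
    return {"user_id": uid, "verified": scores[uid] >= threshold}
```

Keeping both the reference images and the comparison rules inside the contract is what gives the tamper-resistance argued for in the text: no party can alter the reserved base or the matching rule without the chain recording it.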
The embodiment of the specification provides a face recognition method. A face recognition request of a target user sent by a terminal device is received, where the face recognition request includes a captured two-dimensional image containing the face of the target user. A pre-stored face recognition reference image is acquired from the blockchain system based on a pre-deployed first smart contract, where the face recognition reference image is a three-dimensional image and the first smart contract is used to trigger face recognition processing for a user initiating a face recognition request. Feature extraction is performed on the two-dimensional image based on the first smart contract to obtain the image features corresponding to the two-dimensional image, and feature extraction is performed on the face recognition reference image based on the first smart contract to obtain the image features corresponding to the face recognition reference image. Through the first smart contract, face recognition processing is performed on the target user based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image, so as to obtain the recognition result corresponding to the face recognition request. In this way, facial features of the two-dimensional face image and the three-dimensional face image can be compared against each other through the blockchain system, so that the user identity corresponding to the two-dimensional image is located from the reserved base of three-dimensional images, realizing cross-modal face recognition. On the premise that no two-dimensional face image of the user is retained as a reserved base, verification of the user identity is realized through face recognition, which greatly improves the security of the system and the protection of user privacy, ensures high compatibility and expansibility of the acquisition-end equipment, and means that the database storing the reserved-base information of users' three-dimensional face images has characteristics such as low visual identifiability and high privacy security, thereby further protecting the user's private data.
Example IV
As shown in fig. 6, the embodiment of the present disclosure provides a face recognition method, where an execution subject of the method may be a blockchain system, and the blockchain system may be composed of a terminal device or a server, where the terminal device may be a mobile terminal device such as a mobile phone, a tablet computer, or a device such as a personal computer. The server may be a single server, or may be a server cluster formed by a plurality of servers. The method specifically comprises the following steps:
in step S602, a preset machine learning algorithm is obtained based on the pre-deployed second intelligent contract, and a model architecture of the cross-modal face recognition model is constructed based on the preset machine learning algorithm.
In step S604, first triplet data based on two-dimensional images for the first user and second triplet data based on three-dimensional images for the first user are acquired based on the second smart contract, the first triplet data comprising: a first two-dimensional image of the first user, a second two-dimensional image of a user having the same user identification as the first user, and a third two-dimensional image of a user having a different user identification from the first user; the second triplet data comprising: a first three-dimensional image of the first user, a second three-dimensional image of a user having the same user identification as the first user, and a third three-dimensional image of a user having a different user identification from the first user.
In step S606, feature extraction is performed on the first triplet data and the second triplet data based on the second intelligent contract, so as to obtain an image feature set corresponding to the first triplet data and an image feature set corresponding to the second triplet data.
The specific processing manner of step S606 may be varied, and an alternative processing manner is provided below, and specific reference may be made to the following processing of step A2 and step A4.
In step A2, a first feature extraction model trained in advance is obtained based on the second intelligent contract, and feature extraction is performed on the first triplet data through the first feature extraction model, so that an image feature set corresponding to the first triplet data is obtained.
In step A4, a pre-trained second feature extraction model is obtained based on the second intelligent contract, and feature extraction is performed on the second triplet data through the second feature extraction model, so that an image feature set corresponding to the second triplet data is obtained.
In step S608, based on the image feature set corresponding to the first triplet data and the image feature set corresponding to the second triplet data, and taking as the training objective pulling closer the similarity between the image features corresponding to the two-dimensional image and the image features corresponding to the three-dimensional image of the same user identification, while pushing apart the similarity between the image features corresponding to the two-dimensional image and the image features corresponding to the three-dimensional image of different user identifications, the cross-modal face recognition model is subjected to supervised training to obtain the trained cross-modal face recognition model.
It should be noted that the above training of the cross-modal face recognition model may be performed in a blockchain system; in practical applications, considering that the cross-modal face recognition model often needs to be updated irregularly, the above processing may also be performed on other devices, in which case the processing of steps S602 to S608 may include: constructing a model framework of a cross-modal face recognition model based on a preset machine learning algorithm; acquiring two-dimensional image-based first triplet data for a first user and three-dimensional image-based second triplet data for the first user, the first triplet data including: a first two-dimensional image of the first user, a second two-dimensional image of a user having the same user identification as the first user, and a third two-dimensional image of a user having a different user identification than the first user; the second triplet data including: a first three-dimensional image of the first user, a second three-dimensional image of a user having the same user identification as the first user, and a third three-dimensional image of a user having a different user identification than the first user; respectively extracting features of the first triplet data and the second triplet data to obtain an image feature set corresponding to the first triplet data and an image feature set corresponding to the second triplet data; and, based on the two image feature sets, pulling up the similarity between the image features corresponding to the two-dimensional images and the image features corresponding to the three-dimensional images of the same user identification, and pulling apart the similarity between the image features corresponding to the two-dimensional images and the image features corresponding to the three-dimensional images of different user identifications, so as to supervise and train the cross-modal face recognition model and obtain the trained cross-modal face recognition model.
The processing for extracting the features of the first triplet data and the second triplet data to obtain the image feature set corresponding to the first triplet data and the image feature set corresponding to the second triplet data may include: acquiring a pre-trained first feature extraction model, and carrying out feature extraction on first triplet data through the first feature extraction model to obtain an image feature set corresponding to the first triplet data; and acquiring a pre-trained second feature extraction model, and carrying out feature extraction on the second triplet data through the second feature extraction model to obtain an image feature set corresponding to the second triplet data. The trained cross-modal facial recognition model may be stored in a device.
Correspondingly, the blockchain system may store the storage address information of the trained cross-modal face recognition model. After receiving a face recognition request, the storage address information may be obtained from the blockchain system, the cross-modal face recognition model may be obtained based on the storage address information, and the cross-modal face recognition model may then be used to perform face recognition processing on the user.
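The storage-address indirection described here can be sketched minimally, modeling both the on-chain ledger and the off-chain model store as plain dictionaries; the key and address names are hypothetical:

```python
def load_model_via_ledger(ledger, model_key, storage):
    """Fetch the model's storage address from the ledger, then load the
    model bytes from that address. Both stores are modeled as dicts here;
    in practice the ledger lookup would be a blockchain query and the
    storage lookup an off-chain fetch."""
    address = ledger[model_key]
    return storage[address]
```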
In step S610, based on the second smart contract, the first feature extraction model and the second feature extraction model are respectively supervised and trained by a back propagation method, so as to obtain a trained first feature extraction model and a trained second feature extraction model.
In step S612, a face recognition request of the target user transmitted by the terminal device is received, the face recognition request including a captured two-dimensional image including the face of the target user.
In step S614, a pre-stored face recognition reference image, which is a three-dimensional image, is acquired from the blockchain system based on a pre-deployed first smart contract for triggering a face recognition process for a user who initiates a face recognition request.
In step S616, an image detection rule is obtained based on the first intelligent contract, an area where the face of the target user is located in the two-dimensional image is detected based on the image detection rule, and the two-dimensional image is cut based on the detected area where the face of the target user is located in the two-dimensional image, so as to obtain a cut two-dimensional image.
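The detect-then-crop processing of step S616 amounts to slicing the image array with the detected bounding box. A minimal sketch, assuming a hypothetical detector has already returned an `(x, y, w, h)` box:

```python
import numpy as np

def crop_face(image, box):
    """Crop a detected face region from an H x W x C image array.

    `box` is (x, y, w, h) as a hypothetical detector might return it;
    coordinates are clamped to the image bounds."""
    h_img, w_img = image.shape[:2]
    x, y, w, h = box
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(w_img, x + w), min(h_img, y + h)
    return image[y0:y1, x0:x1]
```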
In step S618, a first feature extraction model trained in advance is obtained based on the first intelligent contract, and the cut two-dimensional image is input into the first feature extraction model to perform feature extraction on the two-dimensional image, so as to obtain an image feature corresponding to the two-dimensional image, wherein the first feature extraction model is obtained by performing model training based on a plurality of different historical two-dimensional images.
In an actual application, the face recognition request may include a verifiable statement of the two-dimensional image, and the process of extracting features of the two-dimensional image based on the first smart contract may include: verifying the validity of the verifiable statement; and if the verification result is valid, extracting the features of the two-dimensional image based on the first intelligent contract to obtain the image features corresponding to the two-dimensional image.
The verifiable statement may be a piece of normative information describing certain attributes possessed by an entity such as an individual or organization; it can implement evidence-based trust, informing another entity that certain attributes of the current entity are trustworthy. The verifiable statement may include a plurality of different fields and corresponding field values; for example, one field may be the user identifier, with a corresponding field value of user A, and another field may be the generation time of the two-dimensional image, with a corresponding field value of January 1, 2020, and so on.
In implementation, after the blockchain system acquires the face recognition request, the verifiable statement may be verified first to determine whether it is valid, and corresponding processing is performed based on the verifiable statement only when it is determined to be valid, thereby further ensuring the security of data processing. Specifically, verifying the verifiable statement may be done in various manners: for example, the field values included in the verifiable statement may be obtained and a calculation performed on them by a predetermined algorithm (for example, a hash value of the field values may be calculated by a hash algorithm, etc.) to obtain a corresponding calculation result. The verifiable statement also includes a reference value for this calculation result; the obtained calculation result may be compared with the reference value in the verifiable statement, and if the two are the same, the verification passes and the verifiable statement is valid, whereas if they are different, the verification fails and the verifiable statement is invalid.
In addition to the above manner, various other manners may be used: for example, where the verifiable statement carries a verification value, a verification value for the statement may be computed through a predetermined verification algorithm and then compared with the verification value carried in the statement; if the two are the same, the verification passes and the statement is valid, and if they are different, the verification fails and the statement is invalid. In practical applications, the manner of verifying the validity of the verifiable statement is not limited to the above two manners but may include other realizable manners, which may be set according to the practical situation; the embodiments of the present disclosure are not limited in this respect.
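The hash-based check described above can be sketched as follows; the claim's field layout (`fields`, `reference`) and the choice of SHA-256 are assumptions made for illustration only:

```python
import hashlib
import json

def verify_claim(claim):
    """Recompute a hash over the verifiable statement's field values and
    compare it with the reference value the statement carries.
    Field names are hypothetical."""
    fields = claim["fields"]  # e.g. {"user_id": "user A", "created": "2020-01-01"}
    # Serialize deterministically so issuer and verifier hash identical bytes.
    payload = json.dumps(fields, sort_keys=True).encode("utf-8")
    digest = hashlib.sha256(payload).hexdigest()
    return digest == claim["reference"]
```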
In addition, in an actual application, the face recognition request may include a digital identity information file of the target user, in which the digital identity information of the target user is recorded, and the process of extracting features of the two-dimensional image based on the first smart contract may further include: searching, in the blockchain system, whether a digital identity information file recording the digital identity information of the target user exists among the digital identity information files stored in the blockchain system in advance; and if it exists, carrying out feature extraction on the two-dimensional image based on the first smart contract to obtain the image features corresponding to the two-dimensional image.
The digital identity information may refer to information that identifiably characterizes a user through digital information, that is, the user's real identity information is condensed into a digital code for presentation, so that the user's real-time behavior information can be bound, queried, and verified. The digital identity information may contain not only identity code information such as the user's birth information, individual description, and biological characteristics, but may also relate to personal behavior information of various kinds (such as transaction information, entertainment information, and the like). The digital identity information may be presented in a variety of ways, such as a DID (Decentralized Identifier) and the like. For the case in which the digital identity information is a DID, in practical applications the DID of the target user may be recorded in a DID file (such as a DID Document), and the target user may set his or her digital identity information in the DID file in advance.
In implementation, after the blockchain system acquires the face recognition request, it may acquire the digital identity information file of the target user and extract the target user's digital identity information from it. For example, if the digital identity information of the target user is a DID, the digital identity information file may be a DID Document: the blockchain system may acquire the DID Document of the target user, in which the target user's DID is recorded, and extract the DID from it. The DID extracted from the DID Document carried in the face recognition request may then be compared with the DID recorded in the pre-stored DID Document; if the two DIDs are the same, feature extraction is performed on the two-dimensional image based on the first smart contract to obtain the image features corresponding to the two-dimensional image. The specific processing procedure may refer to the related content above and will not be repeated here.
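The DID comparison described above might be sketched as follows, assuming (hypothetically) that each DID Document is a mapping whose `id` field carries the DID:

```python
def did_matches(request_did_document, stored_documents):
    """Extract the DID from the request's DID Document and check whether
    any pre-stored document on the ledger records the same DID.
    The document shape ({"id": "did:example:..."}) is an assumption."""
    request_did = request_did_document.get("id")
    return any(doc.get("id") == request_did for doc in stored_documents)
```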
In step S620, a pre-trained second feature extraction model is obtained based on the first intelligent contract, and the face recognition reference image is input into the second feature extraction model to perform feature extraction on the face recognition reference image, so as to obtain image features corresponding to the face recognition reference image, where the second feature extraction model is obtained by performing model training based on a plurality of different historical three-dimensional images.
In step S622, the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image are input into a pre-trained cross-mode face recognition model based on the first smart contract, so as to perform face recognition processing on the target user, and obtain a recognition result corresponding to the face recognition request.
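As a stand-in for the cross-modal model's final decision in step S622, a cosine-similarity threshold check illustrates the shape of the comparison; the threshold value is illustrative, and in the specification the decision is produced by the learned model rather than a fixed rule:

```python
import numpy as np

def recognize(feat_2d, feat_3d, threshold=0.8):
    """Compare the 2D-image feature with the 3D reference feature and
    accept when the similarity clears a threshold (illustrative value)."""
    sim = float(np.dot(feat_2d, feat_3d) /
                (np.linalg.norm(feat_2d) * np.linalg.norm(feat_3d)))
    return sim >= threshold
```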
The embodiment of the present specification provides a face recognition method. A face recognition request of a target user sent by a terminal device is received, the face recognition request including a captured two-dimensional image containing the face of the target user. A pre-stored face recognition reference image is obtained from a blockchain system based on a pre-deployed first smart contract, the face recognition reference image being a three-dimensional image and the first smart contract being used to trigger face recognition processing for the user initiating the request. Feature extraction is performed on the two-dimensional image based on the first smart contract to obtain the image features corresponding to the two-dimensional image, and feature extraction is performed on the face recognition reference image based on the first smart contract to obtain the image features corresponding to the face recognition reference image. Face recognition processing is then performed on the target user through the first smart contract based on the two sets of image features, yielding the recognition result corresponding to the face recognition request. In this way, mutual comparison of facial features between a two-dimensional face image and a three-dimensional face image can be realized through the blockchain system, so that the user identity corresponding to the two-dimensional image is located from a reserved base of three-dimensional images, realizing cross-modal face recognition. User identity can thus be verified through face recognition without retaining any two-dimensional face image of the user as a reserved base, greatly improving the security of the system and the protection of user privacy, while ensuring high compatibility and expansibility of the acquisition-end equipment. Moreover, the database storing the reserved base information of users' three-dimensional face images has characteristics such as low visual identifiability and high privacy security, thereby further protecting the user's private data.
Example five
Based on the same concept as the face recognition method provided in the embodiments of the present disclosure, an embodiment of the present disclosure further provides a face recognition device, as shown in fig. 7.
The face recognition device includes: an image acquisition module 701, a first feature extraction module 702, and a face recognition module 703, wherein:
an image acquisition module 701, configured to acquire a pre-stored face recognition reference image when acquiring a face recognition request of a target user, where the face recognition request includes a captured two-dimensional image including a face of the target user, and the face recognition reference image is a three-dimensional image;
the first feature extraction module 702 performs feature extraction on the two-dimensional image to obtain an image feature corresponding to the two-dimensional image, and performs feature extraction on the face recognition reference image to obtain an image feature corresponding to the face recognition reference image;
the face recognition module 703 performs a face recognition process on the target user based on the image feature corresponding to the two-dimensional image and the image feature corresponding to the face recognition reference image, and obtains a recognition result corresponding to the face recognition request.
In an embodiment of the present disclosure, the apparatus further includes:
a face detection module for detecting a region of the two-dimensional image where the face of the target user is located;
the clipping module clips the two-dimensional image based on the detected area of the face of the target user in the two-dimensional image, so as to obtain a clipped two-dimensional image;
the first feature extraction module 702 performs feature extraction on the clipped two-dimensional image to obtain an image feature corresponding to the two-dimensional image.
In this embodiment of the present disclosure, the first feature extraction module 702 inputs the two-dimensional image into a pre-trained first feature extraction model to perform feature extraction on the two-dimensional image, so as to obtain an image feature corresponding to the two-dimensional image, where the first feature extraction model is obtained by performing model training based on a plurality of different historical two-dimensional images.
In this embodiment of the present disclosure, the first feature extraction module 702 inputs the face recognition reference image into a pre-trained second feature extraction model, so as to perform feature extraction on the face recognition reference image, so as to obtain image features corresponding to the face recognition reference image, where the second feature extraction model is obtained by performing model training based on a plurality of different historical three-dimensional images.
In this embodiment of the present disclosure, the face recognition module 703 inputs the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image into a pre-trained cross-modal face recognition model, so as to perform face recognition processing on the target user, and obtain a recognition result corresponding to the face recognition request, where the cross-modal face recognition model is obtained through supervised training based on historical two-dimensional images and historical three-dimensional images of a plurality of different users.
In this embodiment of the present specification, further includes:
the model framework construction module is used for constructing a model framework of the cross-modal facial recognition model based on a preset machine learning algorithm;
a data acquisition module that acquires first triad data based on a two-dimensional image for a first user and second triad data based on a three-dimensional image for the first user, the first triad data including: a first two-dimensional image of the first user, a second two-dimensional image of a user having the same user identification as the first user, and a third two-dimensional image of a user having a different user identification than the first user, the second triplet data comprising: a first three-dimensional image of the first user, a second three-dimensional image of a user having the same user identification as the first user, and a third three-dimensional image of a user having a different user identification than the first user;
The second feature extraction module is used for carrying out feature extraction on the first triplet data and the second triplet data respectively to obtain an image feature set corresponding to the first triplet data and an image feature set corresponding to the second triplet data;
and the model training module is used for performing supervised training on the cross-modal face recognition model based on the image feature set corresponding to the first triplet data and the image feature set corresponding to the second triplet data, pulling up the similarity between the image features corresponding to the two-dimensional images and the image features corresponding to the three-dimensional images of the same user identification, and pulling apart the similarity between the image features corresponding to the two-dimensional images and the image features corresponding to the three-dimensional images of different user identifications, to obtain the trained cross-modal face recognition model.
In an embodiment of the present disclosure, the second feature extraction module includes:
the first feature extraction unit is used for carrying out feature extraction on the first triplet data through the first feature extraction model to obtain an image feature set corresponding to the first triplet data;
and the second feature extraction unit is used for carrying out feature extraction on the second triplet data through the second feature extraction model to obtain an image feature set corresponding to the second triplet data.
In an embodiment of the present disclosure, the apparatus further includes:
and the supervised training module is used for respectively performing supervised training on the first feature extraction model and the second feature extraction model by means of back propagation, to obtain the trained first feature extraction model and the trained second feature extraction model.
The embodiment of the present specification provides a face recognition device. When a face recognition request of a target user is acquired, a pre-stored face recognition reference image is acquired, the face recognition request including a captured two-dimensional image containing the face of the target user, and the face recognition reference image being a three-dimensional image. Feature extraction is then performed on the two-dimensional image to obtain the image features corresponding to the two-dimensional image, and on the face recognition reference image to obtain the image features corresponding to the face recognition reference image. Finally, face recognition processing is performed on the target user based on the two sets of image features to obtain the recognition result corresponding to the face recognition request. In this way, mutual comparison of facial features between a two-dimensional face image and a three-dimensional face image can be realized, so that the user identity corresponding to the two-dimensional image is located from a reserved base of three-dimensional images, realizing cross-modal face recognition. User identity can thus be verified through face recognition without retaining any two-dimensional face image of the user as a reserved base, greatly improving the security of the system and the protection of user privacy, while ensuring high compatibility and expansibility of the acquisition-end equipment. Moreover, the database storing the reserved base information of users' three-dimensional face images has characteristics such as low visual identifiability and high privacy security, thereby further protecting the user's private data.
In addition, through a training process and a supervision mechanism, the mutual facial feature ratio between the two-dimensional facial image and the three-dimensional facial image can be realized, so that the user identity corresponding to the two-dimensional image is positioned from a reserved base of the three-dimensional image, and the cross-modal facial recognition is realized.
Example six
Based on the same concept, the embodiment of the present disclosure further provides a facial recognition apparatus, as shown in fig. 8.
The face recognition device includes: a request module 801, an image acquisition module 802, a feature extraction module 803, and a face recognition module 804, wherein:
a request module 801, configured to receive a face recognition request of a target user sent by a terminal device, where the face recognition request includes a captured two-dimensional image including the face of the target user;
an image acquisition module 802, configured to acquire a pre-stored face recognition reference image from the device based on a pre-deployed first smart contract, where the face recognition reference image is a three-dimensional image, and the first smart contract is configured to trigger a face recognition process for a user who initiates a face recognition request;
the feature extraction module 803 is used for extracting features of the two-dimensional image based on the first intelligent contract to obtain image features corresponding to the two-dimensional image, and extracting features of the face recognition reference image based on the first intelligent contract to obtain image features corresponding to the face recognition reference image;
And the face recognition module 804 performs face recognition processing on the target user based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image through the first intelligent contract, so as to obtain a recognition result corresponding to the face recognition request.
In the embodiment of the present specification, the face recognition request includes a verifiable statement of the two-dimensional image, and the feature extraction module 803 includes:
a verification unit that verifies the validity of the verifiable statement;
and the feature extraction unit is used for extracting the features of the two-dimensional image based on the first intelligent contract if the verification result is valid, so as to obtain the image features corresponding to the two-dimensional image.
In this embodiment of the present disclosure, the face recognition request includes a digital identity information file of the target user, where digital identity information of the target user is recorded in the digital identity information file, and the feature extraction module 803 includes:
the searching unit is used for searching whether a digital identity information file recorded with the digital identity information of the target user exists in the digital identity information file stored in advance in the blockchain system or not;
And the feature extraction unit is used for extracting the features of the two-dimensional image based on the first intelligent contract if the features exist, so as to obtain the image features corresponding to the two-dimensional image.
The embodiment of the present specification provides a face recognition device. A face recognition request of a target user sent by a terminal device is received, the face recognition request including a captured two-dimensional image containing the face of the target user. A pre-stored face recognition reference image is obtained from the blockchain system based on a pre-deployed first smart contract, the face recognition reference image being a three-dimensional image and the first smart contract being used to trigger face recognition processing for the user initiating the request. Feature extraction is performed on the two-dimensional image based on the first smart contract to obtain the image features corresponding to the two-dimensional image, and on the face recognition reference image to obtain the image features corresponding to the face recognition reference image. Face recognition processing is then performed on the target user based on the two sets of image features to obtain the recognition result corresponding to the face recognition request. In this way, mutual comparison of facial features between a two-dimensional face image and a three-dimensional face image can be realized through the blockchain system, so that the user identity corresponding to the two-dimensional image is located from a reserved base of three-dimensional images, realizing cross-modal face recognition. User identity can thus be verified through face recognition without retaining any two-dimensional face image of the user, greatly improving the security of the system and the protection of user privacy, thereby further protecting the user's private data.
Example seven
Based on the same concept as the face recognition device provided in the embodiments of the present disclosure, an embodiment of the present disclosure further provides a face recognition apparatus, as shown in fig. 9.
The face recognition device may be a server, a terminal device, or a device in a blockchain system, or the like provided in the above embodiments.
The facial recognition device may vary considerably in configuration or performance, may include one or more processors 901 and memory 902, and may have one or more stored applications or data stored in memory 902. Wherein the memory 902 may be transient storage or persistent storage. The application programs stored in the memory 902 may include one or more modules (not shown in the figures), each of which may include a series of computer-executable instructions in the facial recognition apparatus. Still further, the processor 901 may be arranged to communicate with the memory 902 to execute a series of computer executable instructions in the memory 902 on the facial recognition device. The facial recognition device may also include one or more power supplies 903, one or more wired or wireless network interfaces 904, one or more input output interfaces 905, and one or more keyboards 906.
In particular, in this embodiment, the facial recognition device includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions in the facial recognition device, and the execution of the one or more programs by the one or more processors comprises computer-executable instructions for:
acquiring a pre-stored face recognition reference image under the condition that a face recognition request of a target user is acquired, wherein the face recognition request comprises a shot two-dimensional image comprising the face of the target user, and the face recognition reference image is a three-dimensional image;
extracting features of the two-dimensional image to obtain image features corresponding to the two-dimensional image, and extracting features of the face recognition reference image to obtain image features corresponding to the face recognition reference image;
and carrying out face recognition processing on the target user based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image to obtain a recognition result corresponding to the face recognition request.
In this embodiment of the present specification, further includes:
detecting an area where the face of the target user is located in the two-dimensional image;
based on the detected region of the face of the target user in the two-dimensional image, cutting the two-dimensional image to obtain a cut two-dimensional image;
the feature extraction is performed on the two-dimensional image to obtain the image feature corresponding to the two-dimensional image, including:
and extracting the characteristics of the cut two-dimensional image to obtain the image characteristics corresponding to the two-dimensional image.
In this embodiment of the present disclosure, the feature extraction of the two-dimensional image to obtain an image feature corresponding to the two-dimensional image includes:
inputting the two-dimensional image into a pre-trained first feature extraction model to extract features of the two-dimensional image to obtain image features corresponding to the two-dimensional image, wherein the first feature extraction model is obtained by model training based on a plurality of different historical two-dimensional images.
In this embodiment of the present disclosure, the feature extraction of the face recognition reference image to obtain an image feature corresponding to the face recognition reference image includes:
inputting the face recognition reference image into a pre-trained second feature extraction model to perform feature extraction on the face recognition reference image to obtain image features corresponding to the face recognition reference image, wherein the second feature extraction model is obtained by model training based on a plurality of different historical three-dimensional images.
In this embodiment of the present disclosure, the performing, based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image, face recognition processing on the target user to obtain a recognition result corresponding to the face recognition request includes:
inputting the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image into a pre-trained cross-modal face recognition model to perform face recognition processing on the target user and obtain a recognition result corresponding to the face recognition request, wherein the cross-modal face recognition model is obtained through supervised training based on historical two-dimensional images and historical three-dimensional images of a plurality of different users.
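One plausible way to compare the extracted 2-D and 3-D features — the text leaves the comparison metric inside the cross-modal model unspecified — is cosine similarity with a decision threshold. The threshold value and function names below are illustrative assumptions:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(feat_2d, feat_3d, threshold=0.8):
    """Compare the feature of the shot two-dimensional image against the
    feature of the three-dimensional reference image; recognition passes
    when the cross-modal similarity reaches the threshold (assumed value)."""
    return cosine_similarity(feat_2d, feat_3d) >= threshold

feat_2d = np.array([0.9, 0.1, 0.4])
feat_3d_same = np.array([0.88, 0.12, 0.41])   # same identity, 3-D modality
feat_3d_other = np.array([-0.2, 0.9, 0.1])    # different identity
print(recognize(feat_2d, feat_3d_same))   # True
print(recognize(feat_2d, feat_3d_other))  # False
```

In the patented scheme this comparison only works because both extractors were trained so that features of the same identity land close together across modalities; with independently trained extractors the two feature spaces would not be comparable.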
In an embodiment of the present specification, the method further includes:
constructing a model framework of the cross-modal face recognition model based on a preset machine learning algorithm;
acquiring first triplet data based on two-dimensional images for a first user and second triplet data based on three-dimensional images for the first user, wherein the first triplet data comprises: a first two-dimensional image of the first user, a second two-dimensional image of a user having the same user identification as the first user, and a third two-dimensional image of a user having a different user identification from the first user; and the second triplet data comprises: a first three-dimensional image of the first user, a second three-dimensional image of a user having the same user identification as the first user, and a third three-dimensional image of a user having a different user identification from the first user;
respectively carrying out feature extraction on the first triplet data and the second triplet data to obtain an image feature set corresponding to the first triplet data and an image feature set corresponding to the second triplet data;
based on the image feature set corresponding to the first triplet data and the image feature set corresponding to the second triplet data, performing supervised training on the cross-modal face recognition model with the training objective of pulling closer the similarity between the image features corresponding to the two-dimensional image and the image features corresponding to the three-dimensional image of the same user identification, and pushing apart the similarity between the image features corresponding to the two-dimensional image and the image features corresponding to the three-dimensional image of different user identifications, to obtain the trained cross-modal face recognition model.
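The training objective described above — drawing the 2-D and 3-D features of the same user identification closer while separating those of different user identifications — is commonly realized with a triplet-style margin loss. The text does not specify an exact loss function, so the sketch below is an illustrative assumption using cosine similarity:

```python
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cross_modal_triplet_loss(anchor_2d, pos_3d, neg_3d, margin=0.3):
    """Pull the 2-D feature toward the 3-D feature of the same user
    identification and push it away from the 3-D feature of a different
    user identification, up to a margin (margin value is an assumption)."""
    return max(0.0, margin - cos(anchor_2d, pos_3d) + cos(anchor_2d, neg_3d))

anchor = np.array([1.0, 0.0])        # 2-D feature of the first user
positive = np.array([0.9, 0.1])      # 3-D feature, same user identification
easy_negative = np.array([0.0, 1.0]) # 3-D feature, different user
hard_negative = np.array([0.8, 0.6]) # different user, but nearby feature

print(round(cross_modal_triplet_loss(anchor, positive, easy_negative), 3))  # 0.0
print(round(cross_modal_triplet_loss(anchor, positive, hard_negative), 3))  # 0.106
```

A well-separated triplet already satisfies the margin and contributes zero loss; only triplets where a different identity's 3-D feature is still too similar produce a gradient, which is exactly the supervision signal the objective calls for.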
In this embodiment of the present disclosure, the feature extracting the first triplet data and the second triplet data to obtain an image feature set corresponding to the first triplet data and an image feature set corresponding to the second triplet data includes:
performing feature extraction on the first triplet data through the first feature extraction model to obtain an image feature set corresponding to the first triplet data;
and carrying out feature extraction on the second triplet data through the second feature extraction model to obtain an image feature set corresponding to the second triplet data.
In an embodiment of the present specification, the method further includes:
and performing supervised training on the first feature extraction model and the second feature extraction model respectively in a back-propagation manner to obtain the trained first feature extraction model and the trained second feature extraction model.
Further, in a particular embodiment, the face recognition device includes a memory and one or more programs, wherein the one or more programs are stored in the memory and may include one or more modules, each module may include a series of computer-executable instructions for the face recognition device, and the one or more programs are configured to be executed by one or more processors, the computer-executable instructions including instructions for:
receiving a face recognition request of a target user sent by a terminal device, wherein the face recognition request comprises a shot two-dimensional image comprising the face of the target user;
acquiring a pre-stored face recognition reference image from the blockchain system based on a pre-deployed first smart contract, wherein the face recognition reference image is a three-dimensional image, and the first smart contract is used for triggering face recognition processing on a user initiating a face recognition request;
extracting features of the two-dimensional image based on the first smart contract to obtain image features corresponding to the two-dimensional image, and extracting features of the face recognition reference image based on the first smart contract to obtain image features corresponding to the face recognition reference image;
and performing face recognition processing on the target user based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image through the first smart contract to obtain a recognition result corresponding to the face recognition request.
In this embodiment of the present disclosure, the face recognition request includes a verifiable claim of the two-dimensional image, and the feature extraction performed on the two-dimensional image based on the first smart contract to obtain an image feature corresponding to the two-dimensional image includes:
verifying the validity of the verifiable claim;
and if the verification result is valid, extracting the features of the two-dimensional image based on the first smart contract to obtain the image features corresponding to the two-dimensional image.
In this embodiment of the present disclosure, the face recognition request includes a digital identity information file of the target user, in which digital identity information of the target user is recorded, and the feature extraction performed on the two-dimensional image based on the first smart contract to obtain an image feature corresponding to the two-dimensional image includes:
searching, in the blockchain system, whether a digital identity information file recording the digital identity information of the target user exists among the digital identity information files stored in advance in the blockchain system;
and if so, extracting the features of the two-dimensional image based on the first smart contract to obtain the image features corresponding to the two-dimensional image.
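The contract-gated flow above — extract features only after a valid verifiable claim or a matching digital identity file is found — can be sketched as a toy simulation in ordinary Python. This is not a real blockchain or smart-contract API; the class, method, and field names (`FaceRecognitionContract`, `stored_identity_files`, and so on) are all illustrative assumptions:

```python
import numpy as np

class FaceRecognitionContract:
    """Toy stand-in for the first smart contract: it gates feature
    comparison on (a) a valid verifiable claim for the 2-D image or
    (b) the presence of the user's digital identity file in the
    blockchain system's storage."""

    def __init__(self, stored_identity_files, reference_features):
        self.stored_identity_files = stored_identity_files  # user id -> identity file
        self.reference_features = reference_features        # user id -> 3-D feature

    def verify_claim(self, claim):
        # Placeholder validity check; a real contract would verify a signature.
        return claim.get("valid", False)

    def recognize(self, user_id, image_feature_2d, claim):
        if not (self.verify_claim(claim) or user_id in self.stored_identity_files):
            return "rejected"  # neither gate passed: do not extract or compare
        ref = self.reference_features[user_id]
        sim = float(np.dot(image_feature_2d, ref) /
                    (np.linalg.norm(image_feature_2d) * np.linalg.norm(ref)))
        return "pass" if sim >= 0.8 else "fail"

contract = FaceRecognitionContract(
    stored_identity_files={"did:example:alice": "identity-file"},
    reference_features={"did:example:alice": np.array([1.0, 0.0])},
)
print(contract.recognize("did:example:alice", np.array([0.95, 0.05]),
                         claim={"valid": False}))  # pass (identity file found)
```

The point of the gating order is that the comparatively expensive feature extraction and comparison only run for requests that have already cleared an identity check on chain.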
The embodiment of the present specification provides a face recognition device. When a face recognition request of a target user is acquired, a pre-stored face recognition reference image is acquired, where the face recognition request comprises a shot two-dimensional image comprising the face of the target user, and the face recognition reference image is a three-dimensional image. Then, feature extraction is performed on the two-dimensional image to obtain image features corresponding to the two-dimensional image, and feature extraction is performed on the face recognition reference image to obtain image features corresponding to the face recognition reference image. Finally, face recognition processing is performed on the target user based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image to obtain a recognition result corresponding to the face recognition request. In this way, the mutual comparison of facial features between a two-dimensional face image and a three-dimensional face image can be realized, so that the user identity corresponding to the two-dimensional image is located in a reserved base library of three-dimensional images, realizing cross-modal face recognition. Since the user's identity is verified through face recognition without retaining any two-dimensional face image of the user, the security of the system and the privacy protection of the user are greatly improved. In addition, because only three-dimensional images are stored as the reserved base library, the device has high expansibility and compatibility, and the stored three-dimensional face data provides higher security for the user's visual facial information.
In addition, through the above training process and supervision mechanism, the mutual comparison of facial features between a two-dimensional face image and a three-dimensional face image can be realized, so that the user identity corresponding to the two-dimensional image is located from a reserved base library of three-dimensional images, realizing cross-modal face recognition.
Example eight
Further, based on the methods shown in fig. 1A and fig. 6, one or more embodiments of the present disclosure further provide a storage medium for storing computer-executable instruction information. In a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer-executable instruction information stored in the storage medium can implement the following flow when executed by a processor:
acquiring a pre-stored face recognition reference image under the condition that a face recognition request of a target user is acquired, wherein the face recognition request comprises a shot two-dimensional image comprising the face of the target user, and the face recognition reference image is a three-dimensional image;
extracting features of the two-dimensional image to obtain image features corresponding to the two-dimensional image, and extracting features of the face recognition reference image to obtain image features corresponding to the face recognition reference image;
and carrying out face recognition processing on the target user based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image to obtain a recognition result corresponding to the face recognition request.
In an embodiment of the present specification, the flow further includes:
detecting an area where the face of the target user is located in the two-dimensional image;
based on the detected region of the face of the target user in the two-dimensional image, cutting the two-dimensional image to obtain a cut two-dimensional image;
the feature extraction is performed on the two-dimensional image to obtain the image feature corresponding to the two-dimensional image, including:
and extracting the characteristics of the cut two-dimensional image to obtain the image characteristics corresponding to the two-dimensional image.
In this embodiment of the present disclosure, the feature extraction of the two-dimensional image to obtain an image feature corresponding to the two-dimensional image includes:
inputting the two-dimensional image into a pre-trained first feature extraction model to extract features of the two-dimensional image to obtain image features corresponding to the two-dimensional image, wherein the first feature extraction model is obtained by model training based on a plurality of different historical two-dimensional images.
In this embodiment of the present disclosure, the feature extraction of the face recognition reference image to obtain an image feature corresponding to the face recognition reference image includes:
inputting the face recognition reference image into a pre-trained second feature extraction model to perform feature extraction on the face recognition reference image to obtain image features corresponding to the face recognition reference image, wherein the second feature extraction model is obtained by performing model training based on a plurality of different historical three-dimensional images.
In this embodiment of the present disclosure, the performing, based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image, face recognition processing on the target user to obtain a recognition result corresponding to the face recognition request includes:
inputting the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image into a pre-trained cross-modal face recognition model to perform face recognition processing on the target user and obtain a recognition result corresponding to the face recognition request, wherein the cross-modal face recognition model is obtained through supervised training based on historical two-dimensional images and historical three-dimensional images of a plurality of different users.
In an embodiment of the present specification, the flow further includes:
constructing a model framework of the cross-modal face recognition model based on a preset machine learning algorithm;
acquiring first triplet data based on two-dimensional images for a first user and second triplet data based on three-dimensional images for the first user, wherein the first triplet data comprises: a first two-dimensional image of the first user, a second two-dimensional image of a user having the same user identification as the first user, and a third two-dimensional image of a user having a different user identification from the first user; and the second triplet data comprises: a first three-dimensional image of the first user, a second three-dimensional image of a user having the same user identification as the first user, and a third three-dimensional image of a user having a different user identification from the first user;
respectively carrying out feature extraction on the first triplet data and the second triplet data to obtain an image feature set corresponding to the first triplet data and an image feature set corresponding to the second triplet data;
based on the image feature set corresponding to the first triplet data and the image feature set corresponding to the second triplet data, performing supervised training on the cross-modal face recognition model with the training objective of pulling closer the similarity between the image features corresponding to the two-dimensional image and the image features corresponding to the three-dimensional image of the same user identification, and pushing apart the similarity between the image features corresponding to the two-dimensional image and the image features corresponding to the three-dimensional image of different user identifications, to obtain the trained cross-modal face recognition model.
In this embodiment of the present disclosure, the feature extracting the first triplet data and the second triplet data to obtain an image feature set corresponding to the first triplet data and an image feature set corresponding to the second triplet data includes:
performing feature extraction on the first triplet data through the first feature extraction model to obtain an image feature set corresponding to the first triplet data;
and carrying out feature extraction on the second triplet data through the second feature extraction model to obtain an image feature set corresponding to the second triplet data.
In an embodiment of the present specification, the flow further includes:
and performing supervised training on the first feature extraction model and the second feature extraction model respectively in a back-propagation manner to obtain the trained first feature extraction model and the trained second feature extraction model.
In addition, in another specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer-executable instruction information stored in the storage medium, when executed by the processor, can implement the following flow:
receiving a face recognition request of a target user sent by a terminal device, wherein the face recognition request comprises a shot two-dimensional image comprising the face of the target user;
acquiring a pre-stored face recognition reference image from the blockchain system based on a pre-deployed first smart contract, wherein the face recognition reference image is a three-dimensional image, and the first smart contract is used for triggering face recognition processing on a user initiating a face recognition request;
extracting features of the two-dimensional image based on the first smart contract to obtain image features corresponding to the two-dimensional image, and extracting features of the face recognition reference image based on the first smart contract to obtain image features corresponding to the face recognition reference image;
and performing face recognition processing on the target user based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image through the first smart contract to obtain a recognition result corresponding to the face recognition request.
In this embodiment of the present disclosure, the face recognition request includes a verifiable claim of the two-dimensional image, and the feature extraction performed on the two-dimensional image based on the first smart contract to obtain an image feature corresponding to the two-dimensional image includes:
verifying the validity of the verifiable claim;
and if the verification result is valid, extracting the features of the two-dimensional image based on the first smart contract to obtain the image features corresponding to the two-dimensional image.
In this embodiment of the present disclosure, the face recognition request includes a digital identity information file of the target user, in which digital identity information of the target user is recorded, and the feature extraction performed on the two-dimensional image based on the first smart contract to obtain an image feature corresponding to the two-dimensional image includes:
searching, in the blockchain system, whether a digital identity information file recording the digital identity information of the target user exists among the digital identity information files stored in advance in the blockchain system;
and if so, extracting the features of the two-dimensional image based on the first smart contract to obtain the image features corresponding to the two-dimensional image.
The embodiment of the present specification provides a storage medium. When a face recognition request of a target user is acquired, a pre-stored face recognition reference image is acquired, where the face recognition request comprises a shot two-dimensional image comprising the face of the target user, and the face recognition reference image is a three-dimensional image. Then, feature extraction is performed on the two-dimensional image to obtain image features corresponding to the two-dimensional image, and feature extraction is performed on the face recognition reference image to obtain image features corresponding to the face recognition reference image. Finally, face recognition processing is performed on the target user based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image to obtain a recognition result corresponding to the face recognition request. In this way, the mutual comparison of facial features between a two-dimensional face image and a three-dimensional face image can be realized, so that the user identity corresponding to the two-dimensional image is located in a reserved base library of three-dimensional images, realizing cross-modal face recognition. Since the user's identity is verified through face recognition without retaining any two-dimensional face image of the user, the security of the system and the privacy protection of the user are greatly improved. In addition, because only three-dimensional images are stored as the reserved base library, the solution has high expansibility and compatibility, and the stored three-dimensional face data provides higher security for the user's visual facial information.
In addition, through the above training process and supervision mechanism, the mutual comparison of facial features between a two-dimensional face image and a three-dimensional face image can be realized, so that the user identity corresponding to the two-dimensional image is located from a reserved base library of three-dimensional images, realizing cross-modal face recognition.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to the method flow). However, with the development of technology, many improvements of current method flows can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compiling is also written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art also know that, in addition to implementing the controller in purely computer-readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Or even the means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function. Of course, the functions of the units may be implemented in one or more pieces of software and/or hardware when implementing one or more embodiments of the present description.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present description are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the present description may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the corresponding parts of the description of the method embodiments.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present description.
Claims (17)
1. A method of face recognition, the method comprising:
acquiring a pre-stored face recognition reference image in a case that a face recognition request of a target user is acquired, wherein the face recognition request comprises a captured two-dimensional image containing the face of the target user, and the face recognition reference image is a three-dimensional image;
extracting features of the two-dimensional image to obtain image features corresponding to the two-dimensional image, and extracting features of the face recognition reference image to obtain image features corresponding to the face recognition reference image;
inputting the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image into a cross-modal face recognition model to perform face recognition processing on the target user so as to obtain a recognition result corresponding to the face recognition request;
the cross-modal face recognition model is obtained through supervised training after feature extraction is performed on first triplet data composed of two-dimensional images of a first user, of a user having the same user identification as the first user, and of a user having a different user identification from the first user, and on second triplet data composed of three-dimensional images of the first user, of the user having the same user identification as the first user, and of the user having a different user identification from the first user.
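As an illustration only (not part of the claims), the final comparison step of claim 1 can be sketched as a similarity check between the two feature vectors once the cross-modal model has mapped both modalities into a shared space. All names and the threshold value below are hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def recognize(feat_2d, feat_3d_ref, threshold=0.8):
    """Threshold the similarity between the 2D probe features and the 3D
    reference features. In the claimed method both vectors would first be
    mapped into a shared space by the cross-modal model; here we assume
    that mapping has already been applied. The threshold is hypothetical."""
    score = cosine_similarity(feat_2d, feat_3d_ref)
    return {"score": score, "match": score >= threshold}

# Toy, already-embedded feature vectors (hypothetical values).
result = recognize([0.9, 0.1, 0.4], [0.88, 0.12, 0.42])
```

In practice the threshold and the distance metric would be chosen on a validation set rather than fixed as above.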
2. The method of claim 1, the method further comprising:
detecting an area where the face of the target user is located in the two-dimensional image;
cropping the two-dimensional image based on the detected area where the face of the target user is located, to obtain a cropped two-dimensional image;
wherein the performing feature extraction on the two-dimensional image to obtain the image features corresponding to the two-dimensional image comprises:
performing feature extraction on the cropped two-dimensional image to obtain the image features corresponding to the two-dimensional image.
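A minimal sketch of the detect-then-crop step of claim 2, assuming a hypothetical detector output of `(top, left, height, width)`; this is illustrative only, not the patented implementation:

```python
def crop_face(image, box):
    """Crop a row-major 2D image (nested lists) to a detected face box.
    `box` is (top, left, height, width) -- a hypothetical output format
    for whatever face detector precedes this step."""
    top, left, height, width = box
    return [row[left:left + width] for row in image[top:top + height]]

# 4x4 toy "image"; pretend a detector returned a 2x2 face region.
img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
face = crop_face(img, (1, 1, 2, 2))
```

Cropping before feature extraction keeps background pixels out of the feature vector, which is the point of the detection step in the claim.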
3. The method according to claim 1 or 2, wherein the feature extraction of the two-dimensional image to obtain the image feature corresponding to the two-dimensional image includes:
inputting the two-dimensional image into a pre-trained first feature extraction model to extract features of the two-dimensional image to obtain image features corresponding to the two-dimensional image, wherein the first feature extraction model is obtained by model training based on a plurality of different historical two-dimensional images.
4. The method according to claim 3, wherein the performing feature extraction on the face recognition reference image to obtain the image features corresponding to the face recognition reference image comprises:
inputting the face recognition reference image into a pre-trained second feature extraction model to perform feature extraction on the face recognition reference image to obtain image features corresponding to the face recognition reference image, wherein the second feature extraction model is obtained by performing model training based on a plurality of different historical three-dimensional images.
5. The method according to claim 4, wherein performing face recognition processing on the target user based on the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image to obtain the recognition result corresponding to the face recognition request comprises:
inputting the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image into a pre-trained cross-modal face recognition model, so as to perform face recognition processing on the target user and obtain a recognition result corresponding to the face recognition request, wherein the cross-modal face recognition model is obtained through supervised training based on historical two-dimensional images and historical three-dimensional images of a plurality of different users.
6. The method of claim 5, the method further comprising:
constructing a model framework of the cross-modal face recognition model based on a preset machine learning algorithm;
acquiring first triplet data based on two-dimensional images of a first user and second triplet data based on three-dimensional images of the first user, wherein the first triplet data comprises: a first two-dimensional image of the first user, a second two-dimensional image of a user having the same user identification as the first user, and a third two-dimensional image of a user having a different user identification from the first user; and the second triplet data comprises: a first three-dimensional image of the first user, a second three-dimensional image of the user having the same user identification as the first user, and a third three-dimensional image of the user having a different user identification from the first user;
respectively carrying out feature extraction on the first triplet data and the second triplet data to obtain an image feature set corresponding to the first triplet data and an image feature set corresponding to the second triplet data;
performing supervised training on the cross-modal face recognition model based on the image feature set corresponding to the first triplet data and the image feature set corresponding to the second triplet data, with the training objective of increasing the similarity between the image features corresponding to two-dimensional images and the image features corresponding to three-dimensional images that have the same user identification, and decreasing the similarity between the image features corresponding to two-dimensional images and the image features corresponding to three-dimensional images that have different user identifications, so as to obtain the trained cross-modal face recognition model.
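The training objective of claim 6 — pulling together features of two-dimensional and three-dimensional images with the same user identification while pushing apart those with different identifications — corresponds to a standard triplet margin loss. A toy sketch with hypothetical feature vectors (the margin value is an assumption):

```python
import math

def l2(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Zero when the positive is closer than the negative by at least
    the margin; otherwise grows with the violation."""
    return max(0.0, l2(anchor, positive) - l2(anchor, negative) + margin)

a = [1.0, 0.0]   # e.g. a 2D feature of the first user
p = [0.9, 0.1]   # a 3D feature with the same user identification
n = [0.0, 1.0]   # a 3D feature with a different user identification

loss_good = triplet_loss(a, p, n)  # ordering satisfied by the margin
loss_bad = triplet_loss(a, n, p)   # ordering violated
```

Minimizing this loss over many such triplets is one standard way to realize the "increase same-identity similarity, decrease different-identity similarity" objective described above.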
7. The method of claim 6, wherein the feature extracting the first triplet data and the second triplet data to obtain the image feature set corresponding to the first triplet data and the image feature set corresponding to the second triplet data includes:
performing feature extraction on the first triplet data through the first feature extraction model to obtain an image feature set corresponding to the first triplet data;
and carrying out feature extraction on the second triplet data through the second feature extraction model to obtain an image feature set corresponding to the second triplet data.
8. The method of claim 7, the method further comprising:
and performing supervised training on the first feature extraction model and the second feature extraction model respectively by way of back-propagation, to obtain the trained first feature extraction model and the trained second feature extraction model.
9. A face recognition method applied to a blockchain system, the method comprising:
receiving a face recognition request of a target user sent by a terminal device, wherein the face recognition request comprises a captured two-dimensional image containing the face of the target user;
acquiring a pre-stored face recognition reference image from the blockchain system based on a pre-deployed first intelligent contract, wherein the face recognition reference image is a three-dimensional image, and the first intelligent contract is used for triggering face recognition processing on a user initiating a face recognition request;
extracting features of the two-dimensional image based on the first intelligent contract to obtain image features corresponding to the two-dimensional image, and extracting features of the face recognition reference image based on the first intelligent contract to obtain image features corresponding to the face recognition reference image;
inputting image features corresponding to the two-dimensional image and image features corresponding to the face recognition reference image into a cross-modal face recognition model through the first intelligent contract so as to perform face recognition processing on the target user and obtain a recognition result corresponding to the face recognition request;
the cross-modal face recognition model is obtained through supervised training after feature extraction is performed on first triplet data composed of two-dimensional images of a first user, of a user having the same user identification as the first user, and of a user having a different user identification from the first user, and on second triplet data composed of three-dimensional images of the first user, of the user having the same user identification as the first user, and of the user having a different user identification from the first user.
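The contract-triggered flow of claim 9 can be simulated off-chain as a plain object: a deployed contract holds the 3D reference images keyed by user identification and, when invoked by a request, runs feature extraction and matching. The class and the stand-in extractor/matcher callables below are hypothetical illustrations, not actual smart-contract code:

```python
class FaceRecognitionContract:
    """Off-chain stand-in for the claimed first intelligent contract."""

    def __init__(self, extract_2d, extract_3d, match):
        self._references = {}        # user id -> stored 3D reference image
        self._extract_2d = extract_2d
        self._extract_3d = extract_3d
        self._match = match

    def store_reference(self, user_id, image_3d):
        self._references[user_id] = image_3d

    def handle_request(self, user_id, image_2d):
        """Triggered by a face recognition request, per the claim."""
        reference = self._references.get(user_id)
        if reference is None:
            return {"recognized": False, "reason": "no reference stored"}
        feat_2d = self._extract_2d(image_2d)
        feat_3d = self._extract_3d(reference)
        return {"recognized": self._match(feat_2d, feat_3d)}

# Stand-in extractors/matcher: identity features, exact-match comparison.
contract = FaceRecognitionContract(lambda x: x, lambda x: x, lambda a, b: a == b)
contract.store_reference("alice", [1, 2, 3])
result = contract.handle_request("alice", [1, 2, 3])
```

In the claimed system the extractors would be the trained feature extraction models and the matcher would be the cross-modal face recognition model, with storage and execution happening on the blockchain rather than in a local dictionary.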
10. The method of claim 9, wherein the face recognition request includes a verifiable statement of the two-dimensional image, and the feature extraction is performed on the two-dimensional image based on the first intelligent contract, so as to obtain image features corresponding to the two-dimensional image, including:
verifying the validity of the verifiable statement;
and if the verification result is valid, extracting the features of the two-dimensional image based on the first intelligent contract to obtain the image features corresponding to the two-dimensional image.
11. The method of claim 9, wherein the face recognition request includes a digital identity information file of the target user, the digital identity information file having digital identity information of the target user recorded therein, and the feature extraction is performed on the two-dimensional image based on the first intelligent contract to obtain image features corresponding to the two-dimensional image, including:
searching the digital identity information files pre-stored in the blockchain system for a digital identity information file in which the digital identity information of the target user is recorded;
and if so, extracting the characteristics of the two-dimensional image based on the first intelligent contract to obtain the image characteristics corresponding to the two-dimensional image.
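The lookup step of claim 11 amounts to searching the pre-stored identity files for one recording the target user's digital identity and proceeding only on a hit. A minimal sketch; the list-of-dicts store and its key names are hypothetical stand-ins for on-chain storage:

```python
def find_identity_file(stored_files, target_identity):
    """Return the stored digital identity information file that records
    the target user's digital identity, or None if no such file exists."""
    for file in stored_files:
        if file.get("digital_identity") == target_identity:
            return file
    return None

store = [{"digital_identity": "did:example:alice", "owner": "alice"}]
hit = find_identity_file(store, "did:example:alice")
miss = find_identity_file(store, "did:example:bob")
```

Feature extraction on the two-dimensional image would then be triggered only when the lookup returns a file, mirroring the "and if so" branch of the claim.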
12. A face recognition apparatus, the apparatus comprising:
an image acquisition module, configured to acquire a pre-stored face recognition reference image when acquiring a face recognition request of a target user, wherein the face recognition request comprises a captured two-dimensional image containing the face of the target user, and the face recognition reference image is a three-dimensional image;
the first feature extraction module is used for carrying out feature extraction on the two-dimensional image to obtain image features corresponding to the two-dimensional image, and carrying out feature extraction on the face recognition reference image to obtain image features corresponding to the face recognition reference image;
the face recognition module is used for inputting the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image into a cross-modal face recognition model, so as to perform face recognition processing on the target user and obtain a recognition result corresponding to the face recognition request;
the cross-modal face recognition model is obtained through supervised training after feature extraction is performed on first triplet data composed of two-dimensional images of a first user, of a user having the same user identification as the first user, and of a user having a different user identification from the first user, and on second triplet data composed of three-dimensional images of the first user, of the user having the same user identification as the first user, and of the user having a different user identification from the first user.
13. A face recognition apparatus, the apparatus comprising:
the face recognition module is used for receiving a face recognition request of a target user sent by a terminal device, wherein the face recognition request comprises a captured two-dimensional image containing the face of the target user;
the image acquisition module is used for acquiring a pre-stored face recognition reference image from the device based on a pre-deployed first intelligent contract, wherein the face recognition reference image is a three-dimensional image, and the first intelligent contract is used for triggering face recognition processing on a user initiating a face recognition request;
the feature extraction module is used for carrying out feature extraction on the two-dimensional image based on the first intelligent contract to obtain image features corresponding to the two-dimensional image, and carrying out feature extraction on the face recognition reference image based on the first intelligent contract to obtain image features corresponding to the face recognition reference image;
the face recognition module inputs the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image into a cross-modal face recognition model through the first intelligent contract, so as to perform face recognition processing on the target user and obtain a recognition result corresponding to the face recognition request;
The cross-modal face recognition model is obtained through supervised training after feature extraction is performed on first triplet data composed of two-dimensional images of a first user, of a user having the same user identification as the first user, and of a user having a different user identification from the first user, and on second triplet data composed of three-dimensional images of the first user, of the user having the same user identification as the first user, and of the user having a different user identification from the first user.
14. A face recognition device, the face recognition device comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
acquiring a pre-stored face recognition reference image in a case that a face recognition request of a target user is acquired, wherein the face recognition request comprises a captured two-dimensional image containing the face of the target user, and the face recognition reference image is a three-dimensional image;
extracting features of the two-dimensional image to obtain image features corresponding to the two-dimensional image, and extracting features of the face recognition reference image to obtain image features corresponding to the face recognition reference image;
inputting the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image into a cross-modal face recognition model to perform face recognition processing on the target user so as to obtain a recognition result corresponding to the face recognition request;
the cross-modal face recognition model is obtained through supervised training after feature extraction is performed on first triplet data composed of two-dimensional images of a first user, of a user having the same user identification as the first user, and of a user having a different user identification from the first user, and on second triplet data composed of three-dimensional images of the first user, of the user having the same user identification as the first user, and of the user having a different user identification from the first user.
15. A storage medium for storing computer-executable instructions that when executed implement the following:
acquiring a pre-stored face recognition reference image in a case that a face recognition request of a target user is acquired, wherein the face recognition request comprises a captured two-dimensional image containing the face of the target user, and the face recognition reference image is a three-dimensional image;
extracting features of the two-dimensional image to obtain image features corresponding to the two-dimensional image, and extracting features of the face recognition reference image to obtain image features corresponding to the face recognition reference image;
inputting the image features corresponding to the two-dimensional image and the image features corresponding to the face recognition reference image into a cross-modal face recognition model to perform face recognition processing on the target user so as to obtain a recognition result corresponding to the face recognition request;
the cross-modal face recognition model is obtained through supervised training after feature extraction is performed on first triplet data composed of two-dimensional images of a first user, of a user having the same user identification as the first user, and of a user having a different user identification from the first user, and on second triplet data composed of three-dimensional images of the first user, of the user having the same user identification as the first user, and of the user having a different user identification from the first user.
16. A face recognition device, the face recognition device being a device in a blockchain system, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
receiving a face recognition request of a target user sent by a terminal device, wherein the face recognition request comprises a captured two-dimensional image containing the face of the target user;
acquiring a pre-stored face recognition reference image from the blockchain system based on a pre-deployed first intelligent contract, wherein the face recognition reference image is a three-dimensional image, and the first intelligent contract is used for triggering face recognition processing on a user initiating a face recognition request;
extracting features of the two-dimensional image based on the first intelligent contract to obtain image features corresponding to the two-dimensional image, and extracting features of the face recognition reference image based on the first intelligent contract to obtain image features corresponding to the face recognition reference image;
inputting image features corresponding to the two-dimensional image and image features corresponding to the face recognition reference image into a cross-modal face recognition model through the first intelligent contract so as to perform face recognition processing on the target user and obtain a recognition result corresponding to the face recognition request;
The cross-modal face recognition model is obtained through supervised training after feature extraction is performed on first triplet data composed of two-dimensional images of a first user, of a user having the same user identification as the first user, and of a user having a different user identification from the first user, and on second triplet data composed of three-dimensional images of the first user, of the user having the same user identification as the first user, and of the user having a different user identification from the first user.
17. A storage medium for storing computer-executable instructions that when executed implement the following:
receiving a face recognition request of a target user sent by a terminal device, wherein the face recognition request comprises a captured two-dimensional image containing the face of the target user;
acquiring a pre-stored face recognition reference image from a blockchain system based on a pre-deployed first intelligent contract, wherein the face recognition reference image is a three-dimensional image, and the first intelligent contract is used for triggering face recognition processing on a user initiating a face recognition request;
extracting features of the two-dimensional image based on the first intelligent contract to obtain image features corresponding to the two-dimensional image, and extracting features of the face recognition reference image based on the first intelligent contract to obtain image features corresponding to the face recognition reference image;
inputting image features corresponding to the two-dimensional image and image features corresponding to the face recognition reference image into a cross-modal face recognition model through the first intelligent contract so as to perform face recognition processing on the target user and obtain a recognition result corresponding to the face recognition request;
the cross-modal face recognition model is obtained through supervised training after feature extraction is performed on first triplet data composed of two-dimensional images of a first user, of a user having the same user identification as the first user, and of a user having a different user identification from the first user, and on second triplet data composed of three-dimensional images of the first user, of the user having the same user identification as the first user, and of the user having a different user identification from the first user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110886912.3A CN113673374B (en) | 2021-08-03 | 2021-08-03 | Face recognition method, device and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113673374A CN113673374A (en) | 2021-11-19 |
CN113673374B true CN113673374B (en) | 2024-01-30 |
Family
ID=78541234
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110886912.3A Active CN113673374B (en) | 2021-08-03 | 2021-08-03 | Face recognition method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113673374B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115294284B (en) * | 2022-10-09 | 2022-12-20 | 南京纯白矩阵科技有限公司 | High-resolution three-dimensional model generation method for guaranteeing uniqueness of generated model |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1139269A2 (en) * | 2000-03-30 | 2001-10-04 | Nec Corporation | Method for matching a two-dimensional image to one of a plurality of three-dimensional candidate models contained in a database |
CN1879113A (en) * | 2003-11-10 | 2006-12-13 | 全感知有限公司 | 2d face anthentication system |
CN109086691A (en) * | 2018-07-16 | 2018-12-25 | 阿里巴巴集团控股有限公司 | A kind of three-dimensional face biopsy method, face's certification recognition methods and device |
CN109460690A (en) * | 2017-09-01 | 2019-03-12 | 虹软(杭州)多媒体信息技术有限公司 | A kind of method and apparatus for pattern-recognition |
WO2019080580A1 (en) * | 2017-10-26 | 2019-05-02 | 深圳奥比中光科技有限公司 | 3d face identity authentication method and apparatus |
CN111046704A (en) * | 2018-10-12 | 2020-04-21 | 杭州海康威视数字技术股份有限公司 | Method and device for storing identity identification information |
CN111310734A (en) * | 2020-03-19 | 2020-06-19 | 支付宝(杭州)信息技术有限公司 | Face recognition method and device for protecting user privacy |
KR20200098875A (en) * | 2019-02-13 | 2020-08-21 | 주식회사 휴먼아이씨티 | System and method for providing 3D face recognition |
CN111783593A (en) * | 2020-06-23 | 2020-10-16 | 中国平安人寿保险股份有限公司 | Human face recognition method and device based on artificial intelligence, electronic equipment and medium |
WO2020215283A1 (en) * | 2019-04-25 | 2020-10-29 | 深圳市汇顶科技股份有限公司 | Facial recognition method, processing chip and electronic device |
CN112052834A (en) * | 2020-09-29 | 2020-12-08 | 支付宝(杭州)信息技术有限公司 | Face recognition method, device and equipment based on privacy protection |
CN112270747A (en) * | 2020-11-10 | 2021-01-26 | 杭州海康威视数字技术股份有限公司 | Face recognition method and device and electronic equipment |
CN112884479A (en) * | 2021-01-29 | 2021-06-01 | 浙江创泰科技有限公司 | Anti-theft self-service parking payment method, system, device and storage medium |
CN113033243A (en) * | 2019-12-09 | 2021-06-25 | 漳州立达信光电子科技有限公司 | Face recognition method, device and equipment |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4696778B2 (en) * | 2005-08-23 | 2011-06-08 | コニカミノルタホールディングス株式会社 | Authentication apparatus, authentication method, and program |
TWI382354B (en) * | 2008-12-02 | 2013-01-11 | Nat Univ Tsing Hua | Face recognition method |
US20160070952A1 (en) * | 2014-09-05 | 2016-03-10 | Samsung Electronics Co., Ltd. | Method and apparatus for facial recognition |
KR101643573B1 (en) * | 2014-11-21 | 2016-07-29 | 한국과학기술연구원 | Method for face recognition, recording medium and device for performing the method |
KR20170000748A (en) * | 2015-06-24 | 2017-01-03 | 삼성전자주식회사 | Method and apparatus for face recognition |
US11244146B2 (en) * | 2019-03-05 | 2022-02-08 | Jpmorgan Chase Bank, N.A. | Systems and methods for secure user logins with facial recognition and blockchain |
Non-Patent Citations (1)
Title |
---|
Face recognition after reconstructing a three-dimensional face depth image from two-dimensional texture; Li Rui; Li Ke; Sun Jiawei; Modern Computer (Professional Edition) (10); 58-61 *
Legal Events

Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40058793; Country of ref document: HK
 | GR01 | Patent grant | 