CN112364803B - Training method, terminal, equipment and storage medium for living body identification auxiliary network - Google Patents

Training method, terminal, equipment and storage medium for living body identification auxiliary network Download PDF

Info

Publication number
CN112364803B
CN112364803B CN202011307025.8A CN202011307025A CN112364803B CN 112364803 B CN112364803 B CN 112364803B CN 202011307025 A CN202011307025 A CN 202011307025A CN 112364803 B CN112364803 B CN 112364803B
Authority
CN
China
Prior art keywords
face
angle
network
living body
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011307025.8A
Other languages
Chinese (zh)
Other versions
CN112364803A (en
Inventor
朱鑫懿
魏文应
安欣赏
张伟民
李革
张世雄
李楠楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Instritute Of Intelligent Video Audio Technology Longgang Shenzhen
Original Assignee
Instritute Of Intelligent Video Audio Technology Longgang Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Instritute Of Intelligent Video Audio Technology Longgang Shenzhen filed Critical Instritute Of Intelligent Video Audio Technology Longgang Shenzhen
Priority to CN202011307025.8A priority Critical patent/CN112364803B/en
Publication of CN112364803A publication Critical patent/CN112364803A/en
Application granted granted Critical
Publication of CN112364803B publication Critical patent/CN112364803B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a training method, a terminal, equipment and a storage medium of a living body identification auxiliary network, which are used for preparing training data; inputting training data into a feature extraction network to obtain face features; respectively inputting the face features into a classification network and an angle detection network to obtain classification results and face angle predicted values; and calculating the difference between the predicted angle and the label angle, distributing different weights to the loss function values of different samples, calculating the loss function values, and then adopting a gradient descent algorithm to update the model parameters. The auxiliary network in the multiple branches is trained by utilizing the angle information, so that the extraction capability of the model to the face features under different angles can be improved, and the adaptability of the model under different face angles is enhanced; the deep learning framework is adopted, the living body recognition training effect is improved by utilizing the angle information output by the auxiliary network, and the size and the recognition time of the actual living body recognition model are not increased.

Description

Training method, terminal, equipment and storage medium for living body identification auxiliary network
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a training method, a terminal, equipment and a storage medium of a living body recognition auxiliary network.
Background
Currently, image recognition technology, especially face recognition, is a long-term research hotspot in the field of computer vision. The development of deep learning and mass data in big data age make the face recognition technology surpass the traditional face recognition algorithm. The improvement of hardware equipment makes the face recognition technology popular in various fields, such as mobile phone face unlocking, company face attendance checking and mobile payment face verification. However, the face recognition technology has potential safety hazards that identity information is stolen, and lawbreakers can perform identity verification through fake living face information and perform illegal activities such as stealing property, endangering public security and the like after the identity verification.
There are many face living body recognition algorithms based on deep learning, and the algorithms adopt the same importance degree for different face angles in the training process, however, the learning difficulty of the face information of different angles is different. In particular, in the case of unbalanced data amount of different angle data, the general training method has a worse living body recognition effect than the face data in the case of non-face. The influence of the turning angle is relieved by adopting a face alignment method generally, but a good effect cannot be obtained on a pitch angle and a deflection angle, and one more step is added when the face alignment is added, so that the time consumption of a process is increased. Therefore, a new face recognition assisting network and training method are needed.
Through the above analysis, the problems and defects existing in the prior art are as follows:
(1) In the case of unbalanced data amount of different angle data, the general training method has a worse living body recognition effect than the face data in the case of non-face.
(2) The influence of the turning angle is relieved by adopting the face alignment method, but a good effect cannot be obtained on the pitch angle and the deflection angle, one more step is added when the face alignment is added, and the time consumption of the process is increased.
The difficulty of solving the problems and the defects is as follows:
the general method for solving the problem (1) is to add a large amount of multi-angle picture data, but the front face pictures acquired in the actual scene are mostly, and the data are difficult to acquire.
Even if multi-angle data of a certain data amount are acquired, the existing general method adopts the same weight for the data of different angles, and the learning difficulty of the data of different angles is different in practice, so that ideal effects are difficult to obtain for the face of partial angles.
The meaning of solving the problems and the defects is as follows:
according to the invention, a large amount of multi-angle picture data do not need to be acquired, a better effect can be obtained under the condition that the amount of face data at different angles is reduced, and the difficulty in acquiring the data is reduced.
For data with poor effect under a specific angle, the method can obtain better effect.
The face alignment link is not required to be added, the steps of the system flow are reduced, and the recognition speed of the system is increased.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides a training method, a terminal, equipment and a storage medium of a living body identification auxiliary network.
The invention is realized in such a way that the training method of the living body identification auxiliary network based on the angle information comprises the following steps:
step one, preparing training data, wherein the training data comprises face pictures to be recognized and labels of the face pictures, and the labels comprise face types and face angle information.
Step two, training data are input into a feature extraction network to obtain face features.
And thirdly, respectively inputting the face features into a classification network and an angle detection network to obtain classification results and face angle predicted values.
And step four, calculating the difference between the predicted angle and the label angle, distributing different weights to the loss function values of different samples, calculating the loss function values, and then adopting a gradient descent algorithm to update the model parameters.
In the first step, the face picture is a face-centered picture, the face position is located at the center of the picture, and the picture is denoted as G R The acquisition method adopts a face correction method.
In the first step, the label corresponding to the face picture includes category and face angle information, the category includes living body and non-living body, wherein the label category is marked as y', the value of the label category is 0 or 1,0 represents non-living body, and 1 represents living body.
In the first step, the face angle information is obtained by estimating three-dimensional rotation angles of the face, namely pitch angle pitch, yaw angle yaw and roll angle roll, using face key points, and is denoted as (θ 1 ′,θ 2 ′,θ 3 '). The detection of key points of the human face adopts a deep learning method, and the calculation of angles adopts an affine transformation method.
In the second step, feature extraction is performed on the face image by using the deep learning model, and the obtained face feature score F is obtained, wherein the feature extraction network adopts any one of a main network structure ResNet, efficientNet, mobileNet and a SqueezeNet.
Further, in the third step, the method for obtaining the classification result and the face angle predicted value includes:
inputting the face features F into a classification network and an angle detection network, and outputting a classification result y by the classification network, wherein the range is [0,1 ]]The classification result with a value close to 0 is non-living, and the classification result with a value close to 1 is living. The angle detection network outputs the angle information of the human face, which is marked as (theta) 123 )。
Further, in the fourth step, the calculation expression of the loss function L based on the angle information is as follows:
wherein N is the total number of training samples, N is the nth sample, C is 3, corresponding to three face angles,value of c-th angle for nth sample of network output, +.>The value of the c-th angle of the nth sample in the actual label, y n For the classification result output by the network, y n0 Is the classification result of the actual label. Wherein (y) n -y n0 ) 2 The conventional mean square error loss function can be replaced by a cross entropy loss function.
Further, in step four, according to the difference between the angle predicted by the network and the true angleCalculate the balance factor of its corresponding angle +.>Realizing that data at different angles adopt different weights according to learning conditions; and finally updating model parameters by a gradient descent method in deep learning.
Another object of the present invention is to provide a network structure implementing a training method of a living body recognition assisting network based on angle information, the network structure including:
and (3) inputting a picture module: the face image is used for acquiring the position of a face in the acquired picture through a face detection model and cutting out the face image;
face feature extraction network: the face extraction method is used for extracting face features in the face image;
the face feature module: the method is used for extracting the face visual information;
classification network: for classifying using visual information in the facial features;
angle detection network: the method comprises the steps of extracting angle-related information in face features, and predicting the angle of the face;
a classification loss function value module: the weight of the sample error is adaptively adjusted according to the difference between the predicted angle information and the angle information of the label;
face angle information module: and the three rotation angles are used for predicting the face image and comprise a pitch angle, a yaw angle and a flip angle.
Another object of the present invention is to provide an information data processing terminal for implementing the training method of the in-vivo identification assistance network based on angle information.
It is a further object of the present invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the method of training a vital sign recognition assistance network based on angle information.
It is another object of the present invention to provide a computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform the method of training a living body recognition assisting network based on angle information.
By combining all the technical schemes, the invention has the advantages and positive effects that: according to the training method of the living body identification auxiliary network based on the angle information, a multi-branch living body identification network is designed based on the face angle information, the auxiliary network in the multi-branch is used for adopting a self-adaptive weight method to samples of different face angles by utilizing the angle information in the training process, and the extraction capacity of a model to face features under different angles is improved, so that the adaptability of the model under different face angles is enhanced.
The invention adopts a deep learning framework, designs an auxiliary network, and is only used when a model is trained, the auxiliary network can be removed when the model is actually applied, and the auxiliary network in multiple branches is trained by utilizing angle information and is used for improving the accuracy of living body identification of non-face data; the angle information output by the auxiliary network is utilized to improve the living body recognition training effect, so that the size and the recognition time of the actual living body recognition model are not increased.
In the training process of the living body recognition model, a multi-branch network structure is adopted, the human face angle information is used for assisting in training, and the recognition accuracy of non-positive face data is improved under the condition that the data quantity of different human face angles is unbalanced. The auxiliary network is only used during training, and the auxiliary network is removed during actual recognition, so that the calculation amount during actual application of the model is not increased. Compared with the general method, the method has higher accuracy of living body identification on the non-positive face data.
The technical effect or experimental effect of comparison. Comprising the following steps:
table 1 shows the comparison of the recognition speed and recognition accuracy of different methods under the same ARM chip.
Table 1 comparison of recognition accuracy and recognition speed
Recognition speed (millisecond) Face accuracy (%) Non-face accuracy (%)
General procedure 142 99.6 80.3
Face alignment and general method 175 99.6 87.6
Method based on angle information 143 99.5 89.6
It can be seen from table 1 that the non-frontal face recognition accuracy is higher than other methods under the condition that the recognition speed is faster and the frontal face accuracy is similar.
Drawings
Fig. 1 is a flowchart of a training method of an in-vivo identification auxiliary network based on angle information according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a training method of an in-vivo identification auxiliary network based on angle information according to an embodiment of the present invention.
Fig. 3 is a network configuration diagram of a training method of an in-vivo identification auxiliary network based on angle information according to an embodiment of the present invention;
in the figure: 1. inputting a picture module; 2. a face feature extraction network; 3. a face feature module; 4. a classification network; 5. an angle detection network; 6. a classification loss function value module; 7. and a face angle information module.
Fig. 4 is a schematic diagram of the face living body recognition method provided by the method in use in a safe.
Detailed Description
In order to more clearly illustrate the technical solution of the embodiments of the present invention, the present invention will be described in more detail by specific embodiments with reference to the accompanying drawings. It is evident that the drawings are only some examples of the invention, from which other embodiments of the invention can be obtained without inventive effort for a person skilled in the art.
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Aiming at the problems existing in the prior art, the invention provides a training method, a terminal, equipment and a storage medium of a living body identification auxiliary network, and the invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the training method of the living body identification auxiliary network based on angle information provided by the embodiment of the invention comprises the following steps:
s101, preparing training data, wherein the training data comprises face pictures to be recognized and labels of the face pictures, and the labels comprise face types and face angle information.
S102, inputting training data into a feature extraction network to obtain face features.
S103, inputting the face features into a classification network and an angle detection network respectively to obtain classification results and face angle predicted values.
S104, calculating the difference between the predicted angle and the label angle, distributing different weights to the loss function values of different samples, calculating the loss function values, and then adopting a gradient descent algorithm to update the model parameters.
The invention is further described below with reference to examples.
Compared with the existing method, the method provided by the invention has the main improvement that: the invention provides a training method of a multi-branch network structure, which utilizes face angle information to carry out auxiliary training.
The principle of the invention is as follows: 1. ) The living body recognition problem is thinned into a multi-angle face feature learning problem, and the data volume and the learning difficulty under each angle are different. The data volume of the front face is large, and the model is easy to learn the characteristics under the front face. The data volume of the non-positive face is small, the data of various angles are unbalanced, and the learning difficulty of the face features of different angles is different. 2. ) An auxiliary network is added in a general feature extraction and classification network, the network utilizes the extracted features to conduct face angle prediction, and the accuracy of angle prediction can reflect the extraction capability of the feature extraction network on face angle information. 3. ) In the trained loss function, different weights are adopted for different training samples according to the accuracy of the angle information predicted by the auxiliary network. That is, the loss function value is increased for samples with larger differences between the predicted angle and the actual angle, and the loss value is decreased for samples with smaller differences. The loss function value of a certain sample reflects the learning degree of the sample, the learning degree of a sample with a large loss function value is poor, and the learning degree of a sample with a small loss function value is good.
The training method of the living body identification auxiliary network based on the angle information comprises four parts: preparing training data, wherein the training data comprises face pictures and corresponding labels, and the labels comprise face types and face angle information; inputting training data into a feature extraction network to obtain face features; the face features are respectively input into a classification network and an angle detection network to obtain classification results and face angle predicted values; and calculating the difference between the predicted angle and the label angle, distributing different weights to the classification loss function values of different samples, and then updating model parameters by adopting a gradient descent algorithm in deep learning (the gradient descent algorithm is a general method in the deep learning). As shown in fig. 2, in the training method of the living body recognition auxiliary network based on angle information, the steps from the preparation of picture training data to the output result and updating of model parameters include the following steps:
step one, preparing training data: the training data comprises a face picture and a label, the face picture is a picture with a centered face, the position of the face is positioned in the center of the picture, and the picture is marked as G R The acquisition method can be a general face correction method. The label corresponding to one face picture comprises categories (living body and non-living body) and face angle information, wherein the label category is marked as y', the value of the label is 0 or 1,0 represents the non-living body, and 1 represents the living body. The face angle information adopts the key points of the face to estimate the three-dimensional rotation angle of the face, namely pitch angle (pitch), yaw angle (yaw) and roll angle (roll), and is recorded as (theta) 1 ′,θ 2 ′,θ 3 '). The key point detection of the human face can adopt a general deep learning method and the calculation of anglesAn affine transformation method can be adopted, and the invention does not limit the acquisition method.
Step two, training pictures to extract facial features: the feature extraction is carried out on the face image by using the deep learning model, and the obtained face feature record F is not limited to the structure of the feature extraction network, and can adopt mainstream network structures such as ResNet, efficientNet, mobileNet, squezeNet and the like.
Step three, obtaining classification loss and face angles: inputting the face features F into a classification network and an angle detection network, and outputting a classification result y by the classification network, wherein the range of the classification result y is [0,1 ]]The classification result with a value close to 0 is non-living, and the classification result with a value close to 1 is living. The angle detection network outputs the angle information of the human face, which is marked as (theta) 123 )。
Step four, calculating a loss function value according to the output angle information and updating model parameters: the loss function L based on angle information designed by the invention is shown as a calculation expression (1):
wherein N is the total number of training samples, N is the nth sample, C is 3, corresponding to three face angles,value of c-th angle for nth sample of network output, +.>The value of the c-th angle of the nth sample in the actual label, y n For the classification result output by the network, y n0 Is the classification result of the actual label. Wherein (y) n -y n0 ) 2 The conventional mean square error loss function can be replaced by a cross entropy loss function. The invention is improved on the basis of this, according to the difference between the angle predicted by the network and the true angle +.>Calculate the balance 444 +.>Therefore, different weights are adopted for the data of different angles according to the learning conditions. And finally updating model parameters by a gradient descent method in deep learning.
The network structure diagram of the training method of the living body identification auxiliary network based on the angle information provided by the embodiment of the invention is shown in fig. 3. Comprising the following steps: the face image processing system comprises an input picture module 1, a face feature extraction network 2, a face feature module 3, a classification network 4, an angle detection network 5, a classification loss function value module 6 and a face angle information module 7.
Input picture module 1: and acquiring the position of a human face from the acquired pictures through a human face detection model and cutting out the human face pictures.
Face feature extraction network 2: a convolutional neural network is used for extracting face features in an image.
Face feature module 3: the data extracted by the neural network contains abundant human face visual information.
Classification network 4: a neural network for classifying by using visual information in face features, for example, a real face or a virtual dummy face is distinguished by using different visual information presented by the real face and the face in paper under the same illumination.
Angle detection network 5: and extracting angle-related information in the face characteristics, and predicting the angle of the face.
The classification loss function value module 6: the loss function based on the angle information, which is provided in the step four, is used for measuring the error of the training sample, and the weight of the sample error is adaptively adjusted according to the difference between the predicted angle information and the angle information of the label.
Face angle information module 7: three rotational angles of the face are predicted, namely pitch angle (pitch), yaw angle (yaw) and roll angle (roll).
The method is used for evaluating training in a living body identification data set Siw, celebA_Spoof and data of a self-acquisition application scene, wherein the evaluation mode is to calculate the accuracy of the face and the non-face. For the output of the classification network, a threshold of 0.5 is set, and a value greater than 0.5 is denoted as a living body and a value less than 0.5 is denoted as a non-living body. The method provided by the invention is superior to the common method under the condition of non-positive face, and the evaluation result is shown in table 1.
Table 1 evaluation results
Face accuracy (%) Non-face accuracy (%)
General procedure 99.6 80.3
Method based on angle information 99.5 89.6
The embodiment of the invention can be realized on edge equipment by combining a deep learning Inference technology (information), which is a technology for deploying a deep learning model to hardware equipment, wherein the hardware equipment provides calculation acceleration for the model, and compared with high-cost equipment (GPU cloud server), the hardware equipment (ARM chip) can acquire real-time calculation speed at lower cost; the edge device refers to the device which can be separated from the network to perform offline computing, and has the advantages that the computing is independent of a server and a network environment, the data storage and the computing are operated locally, the data storage and the computing are not required to be transmitted to a service provider, and the privacy security of a user can be ensured.
The following describes the implementation of the present invention by taking an intelligent safe as an example. The intelligent safe carries out multiple verification through password verification, living body identification and face recognition, and has higher safety, wherein the living body identification is the method used by the invention. The technology used by the intelligent safe is calculated on off-line edge equipment, is independent of network and cloud identification service, and data cannot be intercepted and stolen due to network transmission, so that the intelligent safe has higher privacy security. Fig. 4 is a schematic diagram of the face living body recognition method provided by the method in use in a safe, including the following steps:
step 401: the user enters the safe password for the first verification.
Step 402: for a user who passes the first verification, the living body identification calculation is performed offline on an edge device integrated in the safe.
Step 403: face comparison is carried out on faces passing through living body recognition in a local face library, and calculation of the face comparison is carried out on equipment offline. The match successfully opens the safe.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When used in whole or in part, is implemented in the form of a computer program product comprising one or more computer instructions. When loaded or executed on a computer, produces a flow or function in accordance with embodiments of the present invention, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL), or wireless (e.g., infrared, wireless, microwave, etc.) means. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Disk (SSD)), etc.
The foregoing is merely illustrative of specific embodiments of the present invention, and the scope of the invention is not limited thereto, but any modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present invention will be apparent to those skilled in the art within the scope of the present invention.

Claims (9)

1. The training method of the living body identification auxiliary network based on the angle information is characterized by comprising the following steps of:
preparing training data, and collecting training data of face pictures, face categories and face angles to be recognized;
inputting the training data to a feature extraction network to obtain face features;
the face features are respectively input into a classification network and an angle detection network to obtain classification results and face angle predicted values;
calculating the difference between the predicted angle in the face angle predicted value and the label angle in the classification result, distributing different weights to the loss function values of different samples, calculating the loss function value, and then adopting a gradient descent algorithm to update the loss function parameters, in particular:
the calculation expression of the loss function L based on the angle information is as follows:
where N is the total number of training samples, N is the nth sample,c is taken to be 3, which corresponds to three face angles,value of c-th angle for nth sample of network output, +.>Value of c-th angle for nth sample in actual label, +.>For the classification result of the network output, +.>The classification result is the classification result of the actual label; wherein->The method is a traditional mean square error loss function, and can be replaced by a cross entropy loss function;
difference from true angle based on network predicted angleCalculating the balance factor of the corresponding angle) The data of different angles are subjected to different weights according to the learning conditions; and finally updating model parameters by a gradient descent method in deep learning.
2. The training method of the in-vivo identification auxiliary network based on the angle information as claimed in claim 1, wherein the face picture is a face-centered picture, the face position is located at the center of the picture, and the picture is recorded asThe acquisition method adopts a face correction method.
3. As claimed inThe training method of the living body recognition auxiliary network based on the angle information as set forth in claim 1, wherein the label corresponding to the face picture comprises a category and face angle information, the category comprises living bodies and non-living bodies, and the category of the label is recorded asThe value of the tag class is 0 or 1,0 represents a non-living body, and 1 represents a living body.
4. The training method of the in-vivo identification auxiliary network based on angle information as claimed in claim 1, wherein the face angle information adopts face key points to estimate three-dimensional rotation angles of the face, namely pitch angle, yaw angle yaw and roll angle, which are recorded as @) The method comprises the steps of carrying out a first treatment on the surface of the The detection of key points of the human face adopts a deep learning method, and the calculation of angles adopts an affine transformation method.
5. The training method of the in-vivo identification auxiliary network based on angle information as claimed in claim 1, wherein the feature extraction is performed on the face image by using a deep learning model, and the obtained face feature is recordedThe structure of the feature extraction network adopts any one of a main stream network structure ResNet, efficientNet, mobileNet and a SqueezeNet;
the method for obtaining the classification result and the face angle predicted value comprises the following steps: inputting face featuresInto the classification network and the angle detection network, the classification network outputs classification results->In the range of [0,1 ]]A value of 0 indicates that the result of classification is non-living, and a value of 1 indicatesThe class result is living body; the angle detection network outputs the angle information of the human face, which is marked as (/ -)>)。
6. A network structure, characterized in that the network structure implements the training method of the living body identification auxiliary network based on angle information according to any one of claims 1 to 5, the network structure comprising:
and (3) inputting a picture module: the face image is used for acquiring the position of a face in the acquired picture through a face detection model and cutting out the face image;
face feature extraction network: the face extraction method is used for extracting face features in the face image;
the face feature module: the method is used for extracting the face visual information;
classification network: for classifying using visual information in the facial features;
angle detection network: the method comprises the steps of extracting angle-related information in face features, and predicting the angle of the face;
a classification loss function value module: the weight of the sample error is adaptively adjusted according to the difference between the predicted angle information and the angle information of the label;
face angle information module: and the three rotation angles are used for predicting the face image and comprise a pitch angle, a yaw angle and a flip angle.
7. An information data processing terminal, characterized in that the information data processing terminal is configured to implement the training method of the living body identification auxiliary network based on angle information according to any one of claims 1 to 5.
8. A computer device, characterized in that the computer device comprises a memory and a processor, the memory storing a computer program, which when executed by the processor, causes the processor to execute the training method of the living body identification assisting network based on angle information according to any one of claims 1 to 6.
9. A computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform the method for training a living body recognition assistance network based on angle information according to any one of claims 1 to 6.
CN202011307025.8A 2020-11-20 2020-11-20 Training method, terminal, equipment and storage medium for living body identification auxiliary network Active CN112364803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011307025.8A CN112364803B (en) 2020-11-20 2020-11-20 Training method, terminal, equipment and storage medium for living body identification auxiliary network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011307025.8A CN112364803B (en) 2020-11-20 2020-11-20 Training method, terminal, equipment and storage medium for living body identification auxiliary network

Publications (2)

Publication Number Publication Date
CN112364803A CN112364803A (en) 2021-02-12
CN112364803B true CN112364803B (en) 2023-08-11

Family

ID=74532645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011307025.8A Active CN112364803B (en) 2020-11-20 2020-11-20 Training method, terminal, equipment and storage medium for living body identification auxiliary network

Country Status (1)

Country Link
CN (1) CN112364803B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990057A (en) * 2021-03-26 2021-06-18 北京易华录信息技术股份有限公司 Human body posture recognition method and device and electronic equipment
CN113591782A (en) * 2021-08-12 2021-11-02 北京惠朗时代科技有限公司 Training-based face recognition intelligent safety box application method and system
CN113705425B (en) * 2021-08-25 2022-08-16 北京百度网讯科技有限公司 Training method of living body detection model, and method, device and equipment for living body detection
CN114333011B (en) * 2021-12-28 2022-11-08 合肥的卢深视科技有限公司 Network training method, face recognition method, electronic device and storage medium
CN114511923A (en) * 2021-12-30 2022-05-17 北京奕斯伟计算技术有限公司 Motion recognition method and computing device
CN114360074A (en) * 2022-01-10 2022-04-15 北京百度网讯科技有限公司 Training method of detection model, living body detection method, apparatus, device and medium
CN118116061A (en) * 2024-04-30 2024-05-31 深圳深云智汇科技有限公司 Image processing system based on personnel identification

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506702A (en) * 2017-08-08 2017-12-22 江西高创保安服务技术有限公司 Human face recognition model training and test system and method based on multi-angle
WO2019091271A1 (en) * 2017-11-13 2019-05-16 苏州科达科技股份有限公司 Human face detection method and human face detection system
CN109858362A (en) * 2018-12-28 2019-06-07 浙江工业大学 A kind of mobile terminal method for detecting human face based on inversion residual error structure and angle associated losses function
CN110490067A (en) * 2019-07-11 2019-11-22 平安科技(深圳)有限公司 A kind of face identification method and device based on human face posture
CN111274848A (en) * 2018-12-04 2020-06-12 北京嘀嘀无限科技发展有限公司 Image detection method and device, electronic equipment and storage medium
CN111695522A (en) * 2020-06-15 2020-09-22 重庆邮电大学 In-plane rotation invariant face detection method and device and storage medium
CN111860031A (en) * 2019-04-24 2020-10-30 普天信息技术有限公司 Face pose estimation method and device, electronic equipment and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111191655B (en) * 2018-11-14 2024-04-16 佳能株式会社 Object identification method and device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506702A (en) * 2017-08-08 2017-12-22 江西高创保安服务技术有限公司 Human face recognition model training and test system and method based on multi-angle
WO2019091271A1 (en) * 2017-11-13 2019-05-16 苏州科达科技股份有限公司 Human face detection method and human face detection system
CN111274848A (en) * 2018-12-04 2020-06-12 北京嘀嘀无限科技发展有限公司 Image detection method and device, electronic equipment and storage medium
CN109858362A (en) * 2018-12-28 2019-06-07 浙江工业大学 A kind of mobile terminal method for detecting human face based on inversion residual error structure and angle associated losses function
CN111860031A (en) * 2019-04-24 2020-10-30 普天信息技术有限公司 Face pose estimation method and device, electronic equipment and readable storage medium
CN110490067A (en) * 2019-07-11 2019-11-22 平安科技(深圳)有限公司 A kind of face identification method and device based on human face posture
CN111695522A (en) * 2020-06-15 2020-09-22 重庆邮电大学 In-plane rotation invariant face detection method and device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
龙鑫 等.一种基于角度距离损失函数和卷积神经网络的人脸识别算法.激光与光电子学进展.2018,全文. *

Also Published As

Publication number Publication date
CN112364803A (en) 2021-02-12

Similar Documents

Publication Publication Date Title
CN112364803B (en) Training method, terminal, equipment and storage medium for living body identification auxiliary network
CN113688855B (en) Data processing method, federal learning training method, related device and equipment
US20210056293A1 (en) Face detection method
CN105426870B (en) A kind of face key independent positioning method and device
CN107545241A (en) Neural network model is trained and biopsy method, device and storage medium
CN107590430A (en) Biopsy method, device, equipment and storage medium
CN105069622B (en) A kind of face recognition payment system and method for facing moving terminal
CN108875833A (en) Training method, face identification method and the device of neural network
CN106250821A (en) The face identification method that a kind of cluster is classified again
CN109101602A (en) Image encrypting algorithm training method, image search method, equipment and storage medium
US20230087657A1 (en) Assessing face image quality for application of facial recognition
JP2022521038A (en) Face recognition methods, neural network training methods, devices and electronic devices
CN111783629B (en) Human face in-vivo detection method and device for resisting sample attack
TWI712980B (en) Claim information extraction method and device, and electronic equipment
CN108549854A (en) A kind of human face in-vivo detection method
JP2022141931A (en) Method and device for training living body detection model, method and apparatus for living body detection, electronic apparatus, storage medium, and computer program
US11126827B2 (en) Method and system for image identification
CN107679860A (en) A kind of method, apparatus of user authentication, equipment and computer-readable storage medium
CN107563283A (en) Method, apparatus, equipment and the storage medium of generation attack sample
CN108564040B (en) Fingerprint activity detection method based on deep convolution characteristics
CN107609462A (en) Measurement information generation to be checked and biopsy method, device, equipment and storage medium
CN113515988B (en) Palm print recognition method, feature extraction model training method, device and medium
CN104036254A (en) Face recognition method
US10423817B2 (en) Latent fingerprint ridge flow map improvement
CN111444826A (en) Video detection method and device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant