CN113361422A - Face recognition method based on angle space loss function - Google Patents

Face recognition method based on angle space loss function

Info

Publication number
CN113361422A
CN113361422A
Authority
CN
China
Prior art keywords
face
feature
data set
face data
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110653086.8A
Other languages
Chinese (zh)
Inventor
练智超
李婷婷
陈墨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Chengshi Technology Co ltd
Original Assignee
Zhejiang Chengshi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Chengshi Technology Co ltd filed Critical Zhejiang Chengshi Technology Co ltd
Priority to CN202110653086.8A priority Critical patent/CN113361422A/en
Publication of CN113361422A publication Critical patent/CN113361422A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Abstract

The invention relates to the technical field of face recognition, and particularly discloses a face recognition method based on an angle space loss function. The method comprises: reading a face data set, preprocessing it, and acquiring the face images in the data set; generating a network LResNet based on depth residual modules, and extracting facial features from the face images with LResNet; generating a loss function with an adaptive margin in the angle space, and performing classification prediction and loss calculation on the face features based on this loss function; and training a classifier model, generating a feature extraction model, and verifying and identifying face data with the feature extraction model. The invention realizes real-time face recognition in video material of different resolutions, can be applied in fields such as security monitoring and intelligent payment, and has strong recognition capability, a wide application range, and convenient popularization.

Description

Face recognition method based on angle space loss function
Technical Field
The invention relates to the technical field of face recognition, in particular to a face recognition method based on an angle space loss function.
Background
Processing and analyzing images with computers is a major goal of machine vision and remains a very challenging task. The human visual system quickly analyzes the image information captured by the eyes and identifies face images by combining the analysis of multiple layers of brain neurons. By combining a computer with the Deep Convolutional Neural Network (DCNN), which has developed rapidly in recent years, the extraction and analysis of image information can imitate the human brain, realizing the mapping transformation from feature maps to the original image pixels and thus the classification and identification of face images. This has extremely important research significance in fields such as intelligent robotics, security monitoring, company attendance, and intelligent payment.
According to the development of face recognition, methods can be divided into two categories: traditional feature extraction and deep learning. Traditional feature extraction first establishes a face feature template and then performs matching. Wanyuan et al. propose a layered feature-fusion method combining LBP and HOG, improving the recognition rate by exploiting the complementarity of texture information and edge-contour information. Xian et al. extract multiple face features with the SIFT method, construct a face feature library, compute robust features for recognition by taking the dot product of each feature vector in the library with the other vectors, and finally improve the recognition effect with a weighting strategy over the SIFT features. The performance of these algorithms depends to a large extent on the quality of the acquired images, so they are sensitive to illumination changes, occlusion, and similar interference.
Deep learning has earned its place in machine vision through excellent feature-characterization performance. It obtains deep face features by constructing convolutional neural networks with deep structures and has stronger generalization capability. Deep-learning-based methods first learn high-level face features from massive data with a deep convolutional neural network and then design a classifier for classification. The high accuracy of such methods mainly depends on the expressive power of the network structure, the scale of the data set, and the preprocessing of the training set. The DeepID face recognition algorithm uses the powerful learning and classification capabilities of neural networks to automatically learn face features with better generalization from data and combines recognition and verification signals by weighting; with it, face recognition accuracy exceeded the recognition capability of the human eye in unconstrained scenes for the first time. GoogLeNet approximates an optimal local sparse structure by building dense components in a modular way, which improves performance while controlling the growth of computation. FaceNet proposes a face recognition method based on Triplet Loss, directly learning the separability between features through triplet construction, so that the feature distance within the same identity shrinks and the feature distance between different identities grows, improving recognition performance on large-scale face data sets. NormFace reformulates metric learning by optimizing the cosine similarity of the SoftMax loss function. On this basis, Weiyang Liu et al. propose a completely new loss function (A-SoftMax), which performs well in the MegaFace challenge.
Deng Jiankang et al. improve feature-vector normalization on the basis of A-SoftMax, add an additive angular margin, and propose ArcFace (Additive Angular Margin Loss), which improves inter-class separability and enhances intra-class compactness, needs no joint supervision with other loss functions, and converges easily on any training data set. ArcFace also optimizes the ResNet-50 network and its parameters, improving training efficiency and recognition performance. However, in determining the geometric margin, the non-monotonicity of the ArcFace target-logits curve leaves partially overlapping regions between different classes, which may cause misclassification.
Disclosure of Invention
The invention aims to provide a face recognition method based on an angle space loss function to solve the problems in the background art.
In order to achieve the purpose, the invention provides the following technical scheme:
a method of face recognition based on angular spatial loss bearing, the method comprising:
reading a face data set, and preprocessing the face data set to obtain a face image in the face data set;
generating a network LResNe based on a depth residual error module, and extracting facial features in the facial image based on the network LResNe;
generating a loss function of a self-adaptive interval under an angle space, and performing classification prediction and loss calculation on the human face features based on the loss function;
training a classifier model, generating a feature extraction model, and verifying and identifying the face data based on the feature extraction model.
As a further scheme of the invention: the step of reading the face data set, preprocessing it, and acquiring the face images in the face data set comprises the following steps:
performing data cleaning on the face data set and screening out dirty data;
performing face detection based on the MTCNN algorithm to obtain five feature points of the face: the left eye, right eye, nose tip, left mouth corner, and right mouth corner;
performing an affine transformation on the picture based on the five face feature points, rotating the picture by the required angle so that the two eyes lie on a horizontal line, and cropping the rotated face picture.
As a further scheme of the invention: the step of generating a network LResNet based on a depth residual module and extracting facial features from the facial image based on the network LResNet comprises:
performing a convolution operation with kernel size 3×3 and stride 2 to obtain shallow features of the image and generate a feature layer;
performing invalid-feature suppression on the feature layer through a convolutional attention module;
performing a max pooling operation, with size 3×3 and stride 2, on the feature layer after invalid-feature suppression;
determining a residual structure, and processing the max-pooled feature layer through the residual structure;
performing an average pooling operation on the processed feature map, and adding a fully connected layer of 128 nodes; the fully connected layer serves as the 128-dimensional feature representation of the input face image;
wherein the feature mapping realized by the residual block is:

H(X) = F(X) + X

where X is the input feature map of the residual block, F(X) denotes the residual mapping function implemented by the block's convolutional layers, and H(X) is the output feature map after passing through the residual block; this structure allows the network to converge more quickly.
As a further scheme of the invention: in the step of generating the adaptive-margin loss function in the angle space and carrying out classification prediction and loss calculation on the face features based on the loss function, the loss function is generated as follows:
The prediction probability calculation formula is:

P_i = e^{s_d cos θ_{y_i}} / ( e^{s_d cos θ_{y_i}} + Σ_{j≠y_i} e^{s_d cos θ_j} )

where i denotes the input sample and θ_{y_i} ∈ [0, π] is the angle between the sample feature x_i extracted in the previous step and the weight vector W_{y_i} of its corresponding class y_i; s_d is the dynamically adjustable scale parameter. The dynamic scale parameter s_d^{(t)} of the t-th iteration can be expressed as:

s_d^{(t)} = log B^{(t)} / cos( min(π/4, θ_med^{(t)}) )

where C denotes the total number of classes; θ_med^{(t)} is a median angle variable, the median of the target angles θ_{y_i} over a mini-batch of size N at the t-th iteration, which roughly reflects how far the current network has been optimized on that mini-batch, and whose initial value is set to π/4; B^{(t)} is composed of the batch average of the non-target terms Σ_{j≠y_i} e^{s_d cos θ_j}, and its initial value is set to C-1.
The loss function is:

L = -(1/N) Σ_{i=1}^{N} log P_i
compared with the prior art, the invention has the following beneficial effects: the easily trained LResNet residual neural network is used as the feature extractor, which improves the image-representation capability of the model, converges easily, and raises the model training speed while guaranteeing face recognition accuracy; given the uneven distribution of face samples across classes, a loss function with an adaptive angular margin is proposed on the basis of the additive angular margin loss, so that each class is dynamically compressed; and the real-time face recognition function can be realized in video material of different resolutions, so the method can be applied in fields such as security monitoring and intelligent payment.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
Fig. 1 is a flowchart of a face recognition method based on an angle space loss function according to the present invention.
Fig. 2 is a block diagram of a convolutional neural residual network LResNet.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
Referring to fig. 1-2, in an embodiment of the present invention, a face recognition method based on an angle space loss function is provided, where the method includes steps S1 to S4:
step S1: preprocessing a CASIA-Webface face data set to obtain a face image in the data set;
step S2: designing a network LResNet based on a depth residual error module to extract features;
step S3: designing a loss function of a self-adaptive interval under an angle space, and performing classification prediction and loss calculation on the face features;
step S4: and training the classifier model to obtain a feature extraction model, and verifying and identifying the face data by using the feature extraction model.
The invention provides a face recognition method based on an angle space loss function, comprising a strong residual neural network and a loss function with an adaptive angular margin. First, the CASIA-WebFace data set is cleaned and preprocessed. A deep convolutional neural network LResNet based on residual modules is then designed to automatically learn convolutional features with strong generalization capability from massive face data. Next, a loss function Liloss with an adaptive margin in the angle space is designed, which automatically balances how strongly different classes are compressed and enlarges the margins between classes; it is used for classification prediction and loss calculation on the face features. With this adaptive angular-margin loss as the supervision signal, the LResNet network is retrained on the processed face data set to obtain the feature extraction model; during training, the preprocessed face samples are deliberately and randomly converted into 5 resolution levels so that the model adapts to recognition tasks on pictures of different quality. Finally, the feature extraction model is used to verify and identify face data.
The specific process of preprocessing the CASIA-WebFace face data set in step S1 to obtain the face images in the data set is as follows:
Data cleaning is performed on the original CASIA-WebFace face data set to remove dirty data. Face detection is performed with the MTCNN algorithm to obtain five feature points of the face: the left eye, right eye, nose tip, left mouth corner, and right mouth corner. An affine transformation is then applied to the picture using these five face feature points, rotating it by the required angle so that the two eyes lie on a horizontal line; the rotated face picture is then cropped to 112×96.
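The eye-leveling rotation described above can be sketched without any computer-vision library. Assuming MTCNN has already returned the two eye coordinates (the landmark detector itself is not reimplemented here), a hand-built affine matrix following the convention of OpenCV's `cv2.getRotationMatrix2D` brings the eyes onto a horizontal line; the specific coordinates below are illustrative only:

```python
import numpy as np

def eye_alignment_angle(left_eye, right_eye):
    """Angle (degrees) of the line joining the two eyes, measured from horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return np.degrees(np.arctan2(dy, dx))

def rotation_matrix(center, angle_deg):
    """2x3 affine matrix rotating points by angle_deg around `center` in image
    coordinates (y grows downward), matching cv2.getRotationMatrix2D(center, angle, 1.0)."""
    a = np.radians(angle_deg)
    cos_a, sin_a = np.cos(a), np.sin(a)
    cx, cy = center
    return np.array([
        [cos_a,  sin_a, (1.0 - cos_a) * cx - sin_a * cy],
        [-sin_a, cos_a, sin_a * cx + (1.0 - cos_a) * cy],
    ])

# Rotating by the measured angle brings both eyes onto the same horizontal line;
# cropping to 112x96 would follow this step.
left_eye, right_eye = (30.0, 40.0), (70.0, 60.0)
M = rotation_matrix((50.0, 50.0), eye_alignment_angle(left_eye, right_eye))
l = M @ np.array([*left_eye, 1.0])
r = M @ np.array([*right_eye, 1.0])
assert abs(l[1] - r[1]) < 1e-9  # eyes are now level
```

In practice the same matrix would be handed to an image-warping routine (e.g. `cv2.warpAffine`) to resample the picture before cropping.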
Designing the LResNet based on depth residual modules in step S2 specifically includes the following steps:
1) The input of the LResNet network first undergoes a convolution with kernel size 3×3 and stride 2 to obtain shallow features of the image; the feature layer output by the first convolutional layer then passes through a convolutional attention module to enhance effective features and suppress ineffective features or noise, followed by a max pooling operation with size 3×3 and stride 2;
2) Four groups of residual structures are then stacked: 3 residual blocks, each built from a 1×1 convolutional layer of depth 64, a 3×3 convolutional layer of depth 64, and a 1×1 convolutional layer of depth 256; 4 residual blocks, each built from a 1×1 convolutional layer of depth 128, a 3×3 convolutional layer of depth 128, and a 1×1 convolutional layer of depth 512; 6 residual blocks, each built from a 1×1 convolutional layer of depth 256, a 3×3 convolutional layer of depth 256, and a 1×1 convolutional layer of depth 1024; and 3 residual blocks, each built from a 1×1 convolutional layer of depth 512, a 3×3 convolutional layer of depth 512, and a 1×1 convolutional layer of depth 2048;
3) The resulting feature map goes through an average pooling operation, and a fully connected layer of 128 nodes is added as output; this fully connected output serves as the 128-dimensional feature representation of the input face image.
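A quick arithmetic check of the layer sizes is possible. The patent does not state the stage strides or padding, so the values below (ResNet-50-style strides 1, 2, 2, 2 and "same"-style padding) are assumptions; the trace is a sketch of the assumed LResNet layout, not a definitive specification:

```python
def conv_out(n, kernel, stride, pad):
    """Spatial output size of a conv/pool layer along one axis."""
    return (n + 2 * pad - kernel) // stride + 1

def lresnet_shape_trace(h=112, w=96):
    """Trace (h, w) through the assumed layout: stem conv -> max pool ->
    four bottleneck stages (strides 1, 2, 2, 2 assumed) -> global avg pool."""
    h, w = conv_out(h, 3, 2, 1), conv_out(w, 3, 2, 1)      # 3x3 conv, stride 2
    trace = [("conv3x3/s2", (h, w))]
    h, w = conv_out(h, 3, 2, 1), conv_out(w, 3, 2, 1)      # 3x3 max pool, stride 2
    trace.append(("maxpool3x3/s2", (h, w)))
    for name, stride in [("stage1 (3 blocks)", 1), ("stage2 (4 blocks)", 2),
                         ("stage3 (6 blocks)", 2), ("stage4 (3 blocks)", 2)]:
        if stride == 2:                                     # 1x1 downsampling conv
            h, w = conv_out(h, 1, 2, 0), conv_out(w, 1, 2, 0)
        trace.append((name, (h, w)))
    trace.append(("global avgpool + fc-128", (1, 1)))
    return trace

# Under these assumptions a 112x96 crop shrinks as
# 56x48 -> 28x24 -> 28x24 -> 14x12 -> 7x6 -> 4x3 before the 128-d embedding.
```

Changing the padding or stage strides would shift every entry in the trace, which is exactly why the helper is useful when reimplementing the network.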
The deep convolutional neural network is realized with residual blocks, which avoids the over-fitting phenomenon caused by deepening the network; the feature mapping realized by the residual block is:

H(X) = F(X) + X

where X is the input feature map of the residual block, F(X) denotes the residual mapping function implemented by the block's convolutional layers, and H(X) is the output feature map after passing through the residual block; this structure allows the network to converge more quickly.
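The residual mapping H(X) = F(X) + X can be demonstrated in a few lines of NumPy; the linear map standing in for the bottleneck convolutions is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, f):
    """H(X) = F(X) + X: the stacked layers only have to learn the residual F."""
    return f(x) + x

# A small random linear map stands in for one block's bottleneck convolutions.
W = rng.normal(scale=0.01, size=(8, 8))
f = lambda v: v @ W

x = rng.normal(size=(4, 8))
h = residual_block(x, f)
assert h.shape == x.shape

# If F collapses to zero, the block reduces to the identity mapping, which is
# why stacking many residual blocks does not obstruct optimization.
assert np.allclose(residual_block(x, lambda v: np.zeros_like(v)), x)
```

The identity shortcut is what lets gradients flow unchanged through many stacked blocks, which is the convergence benefit the text refers to.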
Step S3 designs a loss function with an adaptive margin in the angle space; classification prediction and loss calculation on the face features follow the formulas below:
The corresponding prediction probability calculation formula is:

P_i = e^{s_d cos θ_{y_i}} / ( e^{s_d cos θ_{y_i}} + Σ_{j≠y_i} e^{s_d cos θ_j} )

where i denotes the input sample and θ_{y_i} ∈ [0, π] is the angle between the sample feature x_i extracted in the previous step and the weight vector W_{y_i} of its corresponding class y_i; s_d is the dynamically adjustable scale parameter. The dynamic scale parameter s_d^{(t)} of the t-th iteration can be expressed as:

s_d^{(t)} = log B^{(t)} / cos( min(π/4, θ_med^{(t)}) )

where C denotes the total number of classes; θ_med^{(t)} is a median angle variable, the median of the target angles θ_{y_i} over a mini-batch of size N at the t-th iteration, which roughly reflects how far the current network has been optimized on that mini-batch, and whose initial value is set to π/4; B^{(t)} is composed of the batch average of the non-target terms Σ_{j≠y_i} e^{s_d cos θ_j}, and its initial value is set to C-1.
The final loss function is:

L = -(1/N) Σ_{i=1}^{N} log P_i
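One iteration of the adaptive-scale loss might be sketched as follows. The exact update schedule is an assumption; only the roles of s_d, θ_med (median target angle, capped at π/4) and B (batch average of the non-target exponential terms, initialized to C-1) come from the text:

```python
import numpy as np

def adaptive_scale_loss(cos_theta, labels, s_prev):
    """One evaluation of the adaptive angle-space loss on a mini-batch.
    cos_theta: (N, C) cosines between features x_i and class weight vectors W_j.
    s_prev:    scale parameter carried over from the previous iteration."""
    N, C = cos_theta.shape
    rows = np.arange(N)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))

    # B: batch average of the summed non-target terms sum_{j != y_i} e^{s cos θ_j}.
    exp_prev = np.exp(s_prev * cos_theta)
    mask = np.ones_like(cos_theta, dtype=bool)
    mask[rows, labels] = False
    B = exp_prev[mask].reshape(N, C - 1).sum(axis=1).mean()

    # θ_med: median target angle of the mini-batch, capped at π/4.
    theta_med = np.median(theta[rows, labels])
    s_t = np.log(B) / np.cos(min(np.pi / 4, theta_med))

    # Cross-entropy over the rescaled cosine logits gives L = -(1/N) Σ log P_i.
    z = s_t * cos_theta
    z = z - z.max(axis=1, keepdims=True)                # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[rows, labels].mean(), s_t

# Initialization from the text: B = C - 1 and θ_med = π/4 give the starting scale.
C = 10
s0 = np.log(C - 1) / np.cos(np.pi / 4)
```

In training, `s_t` returned from one mini-batch would be fed back as `s_prev` for the next, so the scale tracks how well the network currently separates the batch.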
step S4, training the classifier model to obtain the feature extraction model and using it to verify and identify face data, proceeds as follows:
The LResNet network proposed in step S2 is retrained on the face data set processed in step S1, with the loss function designed in step S3 as the supervision signal. During network training, samples of the preprocessed large face data set CASIA-WebFace are deliberately and randomly converted into 5 resolution levels; a model trained with this strategy keeps a high recognition rate under optimal face imaging while greatly improving the recognition rate among low-resolution face images and between face images of different resolution levels.
After the model extracts features from the face image to be recognized, a 128-dimensional feature vector is generated and compared with the face feature vectors stored in an existing face library to obtain face similarity information.
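The comparison against the face library is a nearest-neighbor search over cosine similarity of the 128-dimensional embeddings. The decision threshold of 0.5 below is an illustrative assumption, not a value from the patent, and the toy one-hot "embeddings" merely stand in for real model outputs:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 128-dimensional face embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query, gallery, threshold=0.5):
    """Return (index, similarity) of the best gallery match,
    or (-1, best similarity) when no match clears the threshold."""
    sims = [cosine_similarity(query, g) for g in gallery]
    best = int(np.argmax(sims))
    if sims[best] < threshold:
        return -1, sims[best]
    return best, sims[best]

# A toy 3-identity library of 128-d embeddings; the query matches identity 1.
gallery = [np.eye(128)[i] for i in range(3)]
query = 2.0 * np.eye(128)[1]          # same direction as identity 1, larger norm
idx, sim = identify(query, gallery)
assert idx == 1 and abs(sim - 1.0) < 1e-9
```

Because cosine similarity normalizes both vectors, the match is invariant to embedding magnitude, which is why angle-space training pairs naturally with this comparison rule.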
The functions realized by the above face recognition method based on an angle space loss function are all completed by computer equipment; the computer equipment comprises one or more processors and one or more memories, the one or more memories store at least one piece of program code, and the program code is loaded and executed by the one or more processors to realize the functions of the face recognition method based on an angle space loss function.
The processor fetches and analyzes instructions from the memory one by one, completes the corresponding operations according to the instruction requirements, and generates a series of control commands that make all parts of the computer act automatically, continuously and in coordination as an organic whole, realizing program input, data input, computation, and output of results; the arithmetic and logic operations generated in this process are completed by the arithmetic unit. The memory includes a Read-Only Memory (ROM) for storing a computer program, and a protection device is arranged outside the memory.
Illustratively, a computer program can be partitioned into one or more modules, which are stored in memory and executed by a processor to implement the present invention. One or more of the modules may be a series of computer program instruction segments capable of performing certain functions, which are used to describe the execution of the computer program in the terminal device.
Those skilled in the art will appreciate that the above description of the service device is merely exemplary and not limiting of the terminal device, and may include more or less components than those described, or combine certain components, or different components, such as may include input output devices, network access devices, buses, etc.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor; it is the control center of the terminal equipment and connects the various parts of the entire user terminal using various interfaces and lines.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the terminal device by operating or executing the computer programs and/or modules stored in the memory and calling the data stored in the memory. The memory mainly comprises a program storage area and a data storage area: the program storage area can store an operating system and the application programs required by at least one function (such as an information-acquisition template display function, a product-information publishing function, and the like); the data storage area can store data created according to the use of the system (e.g., product-information acquisition templates corresponding to different product types, product information that needs to be issued by different product providers, etc.). In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card, at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid-state storage device.
The terminal device's integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the modules/units in the system of the above embodiment may be implemented by a computer program, which may be stored in a computer-readable storage medium and used by a processor to implement the functions of the embodiments of the system. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (4)

1. A face recognition method based on an angle space loss function, characterized by comprising the following steps:
reading a face data set, and preprocessing the face data set to obtain the face images in the face data set;
generating a network LResNet based on a depth residual module, and extracting facial features from the face images based on the network LResNet;
generating a loss function with an adaptive margin in the angle space, and performing classification prediction and loss calculation on the face features based on the loss function;
training a classifier model, generating a feature extraction model, and verifying and identifying the face data based on the feature extraction model.
2. The face recognition method based on an angle space loss function according to claim 1, wherein the step of reading the face data set, preprocessing it, and acquiring the face images in the face data set comprises:
performing data cleaning on the face data set and screening out dirty data;
performing face detection based on the MTCNN algorithm to obtain five feature points of the face: the left eye, right eye, nose tip, left mouth corner, and right mouth corner;
performing an affine transformation on the picture based on the five face feature points, rotating the picture by the required angle so that the two eyes lie on a horizontal line, and cropping the rotated face picture.
3. The method of claim 1, wherein the step of generating a network LResNet based on a depth residual module and extracting the facial features in the facial image based on the network LResNet comprises:
performing a convolution operation with kernel size 3×3 and stride 2 to obtain shallow features of the image and generate a feature layer;
performing invalid-feature suppression on the feature layer through a convolutional attention module;
performing a max pooling operation, with size 3×3 and stride 2, on the feature layer after invalid-feature suppression;
determining a residual structure, and processing the max-pooled feature layer through the residual structure;
performing an average pooling operation on the processed feature map, and adding a fully connected layer of 128 nodes; the fully connected layer serves as the 128-dimensional feature representation of the input face image;
wherein the feature mapping realized by the residual block is:

H(X) = F(X) + X

where X is the input feature map of the residual block, F(X) denotes the residual mapping function implemented by the block's convolutional layers, and H(X) is the output feature map after passing through the residual block, a structure that allows the network to converge more quickly.
4. The method according to claim 1, wherein the generating of the loss function of the adaptive interval in the angle space is performed by the following steps:
the prediction probability is calculated as:

P_i = \frac{e^{s_d \cos(\theta_{y_i} + m)}}{e^{s_d \cos(\theta_{y_i} + m)} + \sum_{j \neq y_i} e^{s_d \cos\theta_j}}

where i denotes the input sample, \theta_{y_i} denotes the angle between the sample feature x_i extracted in the previous step and the weight vector W_{y_i} of its corresponding class y_i, m denotes the angular margin, and s_d is the dynamically adjustable scale parameter; the dynamic scale parameter s_d^{(t)} of the t-th iteration can be expressed as:

s_d^{(t)} = \frac{\log B_{avg}^{(t)}}{\cos\left(\min\left(\pi/4,\ \theta_{med}^{(t)}\right)\right)}

where C denotes the total number of sample classes; \theta_{med}^{(t)} is a median angle variable, namely the median of the angles \theta_{y_i} of all corresponding classes over a mini-batch of size N at the t-th iteration, which can roughly represent the degree to which the current network is optimized on the mini-batch, and its initial value is set to \pi/4; B_{avg}^{(t)} is the mini-batch average of \sum_{j \neq y_i} e^{s_d^{(t-1)} \cos\theta_j}, and its initial value is set to C-1.

The loss function is:

L = -\frac{1}{N} \sum_{i=1}^{N} \log P_i
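As a non-authoritative sketch of the claimed loss (not the patent's reference implementation), the NumPy code below computes a softmax cross-entropy with an additive angular margin and a scale adjusted dynamically from the mini-batch median angle and the average non-target term B_avg; the margin value m, the initialisation of s_prev, and all helper names are illustrative assumptions:

```python
import numpy as np

def l2_normalize(v, axis=-1):
    return v / np.linalg.norm(v, axis=axis, keepdims=True)

def adaptive_margin_loss(features, weights, labels, s_prev, m=0.35):
    """features: (N, D) L2-normalised embeddings; weights: (C, D)
    L2-normalised class weight vectors; labels: (N,) class indices."""
    N = features.shape[0]
    cos_theta = features @ weights.T                 # cosines of all class angles
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    theta_yi = theta[np.arange(N), labels]           # target-class angles

    # B_avg: mini-batch average of the non-target exponential terms
    # (its initial value is C-1 when all angles start near pi/2)
    exp_nontarget = np.exp(s_prev * cos_theta)
    exp_nontarget[np.arange(N), labels] = 0.0
    B_avg = exp_nontarget.sum(axis=1).mean()

    # dynamic scale from the median target angle of the mini-batch
    theta_med = np.median(theta_yi)
    s_d = np.log(B_avg) / np.cos(min(np.pi / 4.0, theta_med))

    # margin-augmented logits, then numerically stable softmax cross-entropy
    logits = s_d * cos_theta
    logits[np.arange(N), labels] = s_d * np.cos(theta_yi + m)
    logits -= logits.max(axis=1, keepdims=True)
    exp_logits = np.exp(logits)
    p = exp_logits / exp_logits.sum(axis=1, keepdims=True)
    loss = -np.mean(np.log(p[np.arange(N), labels] + 1e-12))
    return loss, s_d

rng = np.random.default_rng(0)
C, D, N = 10, 16, 8
features = l2_normalize(rng.standard_normal((N, D)))
weights = l2_normalize(rng.standard_normal((C, D)))
labels = rng.integers(0, C, size=N)
s_prev = np.sqrt(2.0) * np.log(C - 1)   # a common fixed-scale initialisation
loss, s_d = adaptive_margin_loss(features, weights, labels, s_prev)
```

In training, s_d returned at iteration t would be fed back as s_prev for iteration t+1, so the scale tracks how well the network is optimized on each mini-batch.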
CN202110653086.8A 2021-06-11 2021-06-11 Face recognition method based on angle space loss bearing Pending CN113361422A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110653086.8A CN113361422A (en) 2021-06-11 2021-06-11 Face recognition method based on angle space loss bearing

Publications (1)

Publication Number Publication Date
CN113361422A true CN113361422A (en) 2021-09-07

Family

ID=77533876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110653086.8A Pending CN113361422A (en) 2021-06-11 2021-06-11 Face recognition method based on angle space loss bearing

Country Status (1)

Country Link
CN (1) CN113361422A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792643A (en) * 2021-09-10 2021-12-14 武汉理工大学 Living body face recognition method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ji Dongfei: "Research on a deep face recognition algorithm based on an adaptive angular loss function", Application Research of Computers *

Similar Documents

Publication Publication Date Title
CN111860670B (en) Domain adaptive model training method, image detection method, device, equipment and medium
US7912253B2 (en) Object recognition method and apparatus therefor
Han et al. Two-stage learning to predict human eye fixations via SDAEs
Pisharady et al. Attention based detection and recognition of hand postures against complex backgrounds
Oliveira et al. On exploration of classifier ensemble synergism in pedestrian detection
CN111310731A (en) Video recommendation method, device and equipment based on artificial intelligence and storage medium
Jin et al. Adversarial autoencoder network for hyperspectral unmixing
CN111178251A (en) Pedestrian attribute identification method and system, storage medium and terminal
CN108596195B (en) Scene recognition method based on sparse coding feature extraction
Fan et al. Genetic programming for feature extraction and construction in image classification
Lin et al. Determination of the varieties of rice kernels based on machine vision and deep learning technology
Dias et al. A multirepresentational fusion of time series for pixelwise classification
Huang et al. A multi-expert approach for robust face detection
Alphonse et al. A novel maximum and minimum response-based Gabor (MMRG) feature extraction method for facial expression recognition
Zarbakhsh et al. Low-rank sparse coding and region of interest pooling for dynamic 3D facial expression recognition
Sujanaa et al. Emotion recognition using support vector machine and one-dimensional convolutional neural network
CN113361422A (en) Face recognition method based on angle space loss bearing
CN113743426A (en) Training method, device, equipment and computer readable storage medium
Hsieh et al. Video-based human action and hand gesture recognition by fusing factored matrices of dual tensors
Hudec et al. Texture similarity evaluation via siamese convolutional neural network
Asad et al. Low complexity hybrid holistic–landmark based approach for face recognition
Johnson et al. A study on eye fixation prediction and salient object detection in supervised saliency
Mahmood Defocus Blur Segmentation Using Genetic Programming and Adaptive Threshold.
Yifei et al. Flower image classification based on improved convolutional neural network
Hadjkacem et al. Multi-shot human re-identification using a fast multi-scale video covariance descriptor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210907