CN113420585A - Face acquisition and recognition method, system and storage medium - Google Patents

Face acquisition and recognition method, system and storage medium

Info

Publication number
CN113420585A
CN113420585A (application number CN202110429920.5A)
Authority
CN
China
Prior art keywords
face
recognized
features
position points
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110429920.5A
Other languages
Chinese (zh)
Inventor
麦伟彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shengye Information Technology Co ltd
Original Assignee
Guangzhou Shengye Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shengye Information Technology Co ltd filed Critical Guangzhou Shengye Information Technology Co ltd
Priority to CN202110429920.5A
Publication of CN113420585A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroïds
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention discloses a face acquisition and recognition method, system and storage medium. The method comprises the following steps: acquiring a large number of face images, extracting the face features of these images, marking the face position points corresponding to the face features, and storing them in a face library; triggering an intelligent terminal to perform face recognition and extracting the face features of the face to be recognized; and, when the distance between the position points of the face features of the face to be recognized and the position points of the face features of a face image in the face library is smaller than a first set threshold, returning the person information corresponding to that face image. By comparing the position points of the face features of the face to be recognized with those of the face images in the face library, the invention enables rapid confirmation of user identity information.

Description

Face acquisition and recognition method, system and storage medium
Technical Field
The invention relates to the technical field of face recognition, in particular to a face acquisition and recognition method, a face acquisition and recognition system and a storage medium.
Background
At present, face recognition is a biometric technology that identifies a person's identity based on facial feature information. With the development of the technology and its growing social acceptance, face recognition is now applied in many fields, for example: face-scanning payment for online and offline purchases, contactless face-based attendance in schools and enterprises, and contactless face-based entry and exit for community residents.
The currently popular ways of confirming a user's identity, such as scanning two-dimensional codes, fingerprint identification and IC cards, easily cause congestion, impersonation and counterfeiting. Developing a system that confirms user identity information in a contactless, rapid, safe and reliable manner has therefore become a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above technical problems, an object of the present invention is to provide a face acquisition and recognition method, system and storage medium that confirm user identity information in a contactless, rapid, safe and reliable manner.
The invention adopts the following technical scheme:
a face acquisition and recognition method comprises the following steps:
acquiring a large number of face images, extracting face features of the large number of face images, marking face position points corresponding to the face features, and storing the face position points in a face library;
triggering an intelligent terminal to perform face recognition; extracting the face features of the face to be recognized;
acquiring the position points of the face features of the face to be recognized; comparing these position points one by one with the position points of the face features of the n face images in the face library, where n is a natural number; and, when the distance between the position points of the face features of the face to be recognized and the position points of the face features of a face image in the face library is smaller than a first set threshold, returning the person information corresponding to that face image.
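As a minimal sketch of the comparison step above (illustrative only, not part of the patent; the function names, the two-point landmarks and the threshold value are assumed examples), the one-by-one library comparison might look like:

```python
import math

def landmark_distance(a, b):
    # Euclidean distance between two landmark sets given as (x, y) points
    return math.sqrt(sum((xa - xb) ** 2 + (ya - yb) ** 2
                         for (xa, ya), (xb, yb) in zip(a, b)))

def match_face(query_points, face_library, threshold):
    # Compare the query landmarks one by one against the n entries of the
    # face library and return the person information of the first entry
    # whose distance is below the first set threshold, or None on failure.
    for person_info, stored_points in face_library:
        if landmark_distance(query_points, stored_points) < threshold:
            return person_info
    return None
```

For example, with a library entry `("person A", [(0.0, 0.0), (1.0, 1.0)])` and threshold 0.5, a query landmark set close to that entry returns "person A", while a distant one returns None.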
Further, the method also comprises the following steps: and comparing the face characteristics of the face to be recognized with the face characteristics of a certain face image in the face library, and returning the similarity between the face characteristics of the face to be recognized and the face characteristics of the face image.
Further, the method also comprises the following steps: and when the distances between the position points of the face features of the face to be recognized and the position points of the face features of all the face images in the face library are not less than a first set threshold, judging that the face recognition fails, and storing the face images captured by the current face recognition.
Further, the method also comprises the following steps: judging whether the face recognition failure has an environmental cause, wherein the environmental cause includes that the illumination intensity of the environment where the intelligent terminal is located is lower than a preset intensity threshold.
Further, the extracting the face features of the face to be recognized includes:
establishing a human face characteristic point shape driving depth model based on a convolutional neural network;
detecting a human face through a RetinaFace human face detection model, and generating multilayer detection frames with different sizes; the detection frames comprise target frames with different sizes; training the face characteristic point shape driving depth model in each layer of detection frame;
and carrying out face feature extraction and fusion by utilizing the trained face feature point shape driving depth model, wherein the fusion comprises the fusion of face feature position points and face postures.
The face feature point shape-driven depth model comprises a main network and an auxiliary sub-network, wherein the main network adopts a regional convolution network.
Further, the extracting the face features of the face to be recognized includes:
and establishing an FECNN parameter model, and sending the face feature position points and the face image into the FECNN parameter model for feature extraction to obtain the face features of the face to be recognized.
Further, the establishing of the FECNN parameter model comprises: constructing a plurality of convolution layers, a plurality of pooling layers, a plurality of Inception layers, a fully-connected feature extraction layer and a softmax classification layer, wherein the convolution, pooling and Inception layers are connected in sequence in a staggered manner and are then connected with the fully-connected feature extraction layer and the softmax classification layer in sequence.
A face acquisition recognition system comprising:
the system comprises a face sample acquisition module, a face database storage module and a face recognition module, wherein the face sample acquisition module is used for acquiring a large number of face images, extracting face characteristics of the large number of face images, marking face position points corresponding to the face characteristics and storing the face position points into a face database;
the face recognition module is used for carrying out face recognition; extracting the face features of the face to be recognized;
the face matching module is used for acquiring the position points of the face features of the face to be recognized; comparing these position points one by one with the position points of the face features of the n face images in the face library, where n is a natural number; and, when the distance between the position points of the face features of the face to be recognized and the position points of the face features of a face image in the face library is smaller than a first set threshold, returning the person information corresponding to that face image.
A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the face acquisition recognition method.
Compared with the prior art, the invention has the beneficial effects that:
the invention can realize the rapid confirmation of the user identity information by comparing the position points of the face features of the face to be recognized with the position points of the face features of the face images in the face library.
Furthermore, the trained face feature point shape-driven depth model is used for face feature extraction and fusion, where the fusion comprises the fusion of face feature position points and face poses; the face features and poses are predicted by the main network and the auxiliary sub-network respectively, which improves the accuracy of face recognition.
Drawings
Fig. 1 is a schematic flow chart of a face acquisition and recognition method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a face acquisition and recognition method according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to another embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific embodiments. It should be noted that, provided there is no conflict, the embodiments or technical features described below may be combined arbitrarily to form new embodiments:
the first embodiment is as follows:
referring to fig. 1, a face collecting and recognizing method according to the present invention is shown, including:
step S1, acquiring a large number of face images, extracting face features of the large number of face images, marking face position points corresponding to the face features, and storing the face position points in a face library;
step S2, triggering the intelligent terminal to recognize the face; extracting the face features of the face to be recognized;
optionally, the extracting the face features of the face to be recognized includes:
establishing a human face characteristic point shape driving depth model based on a convolutional neural network;
detecting a human face through a RetinaFace human face detection model, and generating multilayer detection frames with different sizes; the detection frames comprise target frames with different sizes; training the face characteristic point shape driving depth model in each layer of detection frame;
and carrying out face feature extraction and fusion by utilizing the trained face feature point shape driving depth model, wherein the fusion comprises the fusion of face feature position points and face postures.
As an example, the face feature point shape-driven depth model includes a primary network and a secondary sub-network, the primary network employing a regional convolution network.
The number of regional convolution networks is N. Each regional convolution network is composed of the convolution layers and pooling layers of a DCNN and is used to extract face feature points from one of N regions of the face, where the N regions are obtained by dividing the face image according to the position information of the face feature points.
In the present embodiment, the face may be divided into a whole face, a hair region, a right eyebrow region, a left eye region, a right eye region, a nose region, and a mouth region.
As an example, the human face feature points include at least one of two corner points of each eyebrow and a center point thereof, two corner points of each eye, upper and lower eyelid center points and an eye center point, a nose tip point, a nose apex, two nose wing points, a nose middle point, two corner points of the mouth, a mouth center point, an uppermost point of the upper lip, and a lowermost point of the lower lip.
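The fusion of face feature position points and face pose mentioned above can be sketched as a simple concatenation (an illustrative assumption; the patent does not specify the fusion operator, and the function name is hypothetical):

```python
def fuse_features(landmark_points, pose):
    # Flatten the (x, y) landmark coordinates and append the pose angles
    # (yaw, pitch, roll), giving one fused feature vector.
    fused = [coord for point in landmark_points for coord in point]
    fused.extend(pose)
    return fused
```

Two landmarks and a three-angle pose thus yield a single seven-element vector that downstream layers can consume.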
As another embodiment, the extracting the face features of the face to be recognized may include:
and establishing an FECNN parameter model, and sending the face feature position points and the face image into the FECNN parameter model for feature extraction to obtain the face features of the face to be recognized.
The establishing of the FECNN parameter model comprises: constructing a plurality of convolution layers, a plurality of pooling layers, a plurality of Inception layers, a fully-connected feature extraction layer and a softmax classification layer, wherein the convolution, pooling and Inception layers are connected in sequence in a staggered manner and are then connected with the fully-connected feature extraction layer and the softmax classification layer in sequence.
Specifically, the convolution layers perform convolution operations on the input face feature position points and face image to generate feature maps; the pooling layers downsample the feature maps generated by the convolution layers to reduce their size; each Inception layer comprises convolution and pooling layers with multiple branches and performs multi-branch feature extraction on the input feature map, using small convolution kernels to further reduce the feature-map size before combining the branch outputs into a two-dimensional feature map; the fully-connected feature extraction layer compresses the input two-dimensional feature map into a feature vector of fixed dimension; and the softmax classification layer performs a classification operation on the feature vector and outputs the probability that it belongs to a specific category. When this probability exceeds a set value, the FECNN parameters can be judged to have converged.
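The softmax classification and convergence check described above can be illustrated as follows (an illustrative sketch; the set value of 0.9 is an assumed example, not a value given in the patent):

```python
import math

def softmax(logits):
    # Numerically stable softmax: shift by the maximum before exponentiating
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def converged(logits, target_class, set_value=0.9):
    # Convergence test as described above: the probability of the specific
    # category must exceed a set value (0.9 here is an assumed example).
    return softmax(logits)[target_class] > set_value
```

A strongly separated score vector such as `[10.0, 0.0]` yields a class-0 probability near 1 and passes the check, while an undecided `[0.0, 0.0]` (probability 0.5) does not.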
Step S3, obtaining the position points of the face features of the face to be recognized; comparing the position points of the face features of the face to be recognized with the position points of the face features of n face images in a face library one by one, wherein n is a natural number;
optionally, the obtaining of the position points of the face features of the face to be recognized includes:
and acquiring a feature training model, and acquiring the position points of the face features of the face to be recognized by adopting the feature training model.
The distance D2 between the position points of the face features of the face to be recognized and the position points of the face features of a face image in the face library can be calculated by the following formula:

D2 = sqrt( Σ_{i=1}^{p} (A1_i − A2_i)^2 )

wherein A1 denotes the position points of the face features of the face to be recognized, A2 denotes the position points of the face features of the face image in the face library, and p is the number of face feature points.
Further, the method for obtaining the feature training model includes:
acquiring a face image as a sample data set, and marking face feature position points of the sample data set; dividing the sample data set into a training set and a verification set;
constructing an improved MobileNet network, training the improved MobileNet network by using the sample data set, and enabling the MobileNet network to output position points of human face features;
and inputting the face images of the verification set into the trained improved MobileNet network, if the distance between the output of the improved MobileNet network and the mark of the verification set is smaller than a second preset threshold value, indicating that the improved MobileNet network passes the verification, and taking the verified improved MobileNet network as a feature training model.
The second preset threshold may be set equal to the first set threshold or different from it, according to the precision required of the improved MobileNet network.
Specifically, the parameters of the main structure of the improved MobileNet network are as follows:
table: parameter table of MobileNet-V2 model
[Table: parameter settings of the main structure of the MobileNet-V2 model, reproduced as an image in the original publication]
Specifically, the improved MobileNet network comprises:
an Expansion layer, which adopts a 1 x 1 network structure to map a low-dimensional space to a high-dimensional space;
depthwise separable convolution layers, which first apply a depthwise convolution to each input channel separately and then combine the outputs with a pointwise (1 × 1) convolution, collecting the position of each feature;
and a Projection layer, which compresses the feature data collected by the pointwise convolution.
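The benefit of the depthwise separable convolution described above can be seen from a simple parameter count (a standard comparison, not specific to this patent; bias terms are ignored):

```python
def standard_conv_params(k, c_in, c_out):
    # Weight count of an ordinary k x k convolution: every output channel
    # has one k x k filter per input channel.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel;
    # pointwise step: a 1 x 1 convolution combining the channels.
    return k * k * c_in + c_in * c_out
```

For a 3 × 3 layer with 32 input and 64 output channels, the separable form needs 2336 weights versus 18432 for the standard form, roughly an eightfold reduction.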
The distance d2 between the output of the improved MobileNet network and the markers of the verification set also satisfies:

d2 = sqrt( Σ_{j=1}^{p} (i1_j − i2_j)^2 )

wherein i1 denotes the position points output by the improved MobileNet network, i2 denotes the position points marked in the verification set, and p is the number of face feature points.
In this embodiment, the MobileNet network parameters can be updated by gradually decreasing the loss function.
The loss function L satisfies:

L = (1/M) Σ_{m=1}^{M} Σ_{n=1}^{N} λ_n · (d_n^m)^2

wherein L is the loss function of the MobileNet network, M is the number of samples, N is the number of feature points, λ_n is the weight value of each face pose, and d_n^m is the measure of the position distance of the n-th feature point of the m-th sample.
The pose comprises side face, front face, head up, head down, expression and occlusion, and is determined by the three face angles yaw, pitch and roll.
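One plausible reading of the pose-weighted loss described above, sketched under the assumption that it is a weighted mean of squared position distances (the exact published form may differ):

```python
def landmark_loss(batch_distances, pose_weights):
    # batch_distances: M samples x N feature points, each entry the
    # position-distance measure d for that point.
    # pose_weights: the lambda_n weight assigned to each feature point's pose.
    m = len(batch_distances)
    return sum(w * d ** 2
               for sample in batch_distances
               for d, w in zip(sample, pose_weights)) / m
```

Weighting each pose differently lets hard cases (side faces, occlusions) contribute more or less to the gradient than easy frontal samples.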
Step S4, when the distance between the position points of the face features of the face to be recognized and the position points of the face features of a face image in the face library is smaller than the first set threshold, returning the person information corresponding to that face image.
Example two:
referring to fig. 2, a flow chart of a method according to another embodiment of the present invention is shown, which is different from the previous embodiment in that the method further includes:
and step S5, comparing the face characteristics of the face to be recognized with the face characteristics of a certain face image in the face library, and returning the similarity between the face characteristics of the face to be recognized and the face characteristics of the face image.
Specifically, the similarity of the face features can be measured by the Euclidean distance and the cosine distance. When the included angle θ between two feature vectors tends to 0, the vectors are closer and their difference is smaller; at this time cos θ tends to 1, i.e., the closer cos θ is to 1, the more similar the faces are.
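The cosine measure above can be computed directly (a minimal sketch; the function name is illustrative):

```python
import math

def cosine_similarity(u, v):
    # cos(theta) between two feature vectors; values near 1 indicate
    # similar faces, as described above.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Parallel vectors score 1, orthogonal vectors score 0, so a threshold near 1 can serve as the similarity criterion.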
Optionally, the method of the present invention further comprises:
and when the distances between the position points of the face features of the face to be recognized and the position points of the face features of all the face images in the face library are not less than a first set threshold, judging that the face recognition fails, and storing the face images captured by the current face recognition.
And judging whether the reason of face recognition failure has an environmental reason or not, wherein the environmental reason comprises that the illumination intensity of the environment where the intelligent terminal is located is lower than a preset intensity threshold value.
In this embodiment, it is judged whether the illumination intensity of the environment where the intelligent terminal is located is lower than the preset intensity threshold, and supplementary lighting is applied when it is. This helps the intelligent terminal capture a clearer face in poorly lit scenes and improves its face recognition success rate in such scenes.
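The fill-light decision above reduces to a threshold check (a trivial sketch; the 50-lux default is an assumption, since the patent only speaks of a preset intensity threshold):

```python
def needs_fill_light(ambient_lux, intensity_threshold=50.0):
    # Apply supplementary lighting when the ambient illumination of the
    # intelligent terminal's environment falls below the preset intensity
    # threshold (50.0 lux is an assumed placeholder value).
    return ambient_lux < intensity_threshold
```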
Example three:
the invention relates to a face acquisition feature training system, which comprises:
the system comprises a face sample acquisition module, a face database storage module and a face recognition module, wherein the face sample acquisition module is used for acquiring a large number of face images, extracting face characteristics of the large number of face images, marking face position points corresponding to the face characteristics and storing the face position points into a face database;
the face recognition module is used for carrying out face recognition; extracting the face features of the face to be recognized;
the face matching module is used for acquiring the position points of the face features of the face to be recognized; comparing these position points one by one with the position points of the face features of the n face images in the face library, where n is a natural number; and, when the distance between the position points of the face features of the face to be recognized and the position points of the face features of a face image in the face library is smaller than a first set threshold, returning the person information corresponding to that face image.
Optionally, the obtaining of the position points of the face features of the face to be recognized includes: and acquiring a feature training model, and acquiring the position points of the face features of the face to be recognized by adopting the feature training model.
The method for obtaining the feature training model can comprise the following steps:
acquiring a face image as a sample data set, and marking face feature position points of the sample data set; dividing the sample data set into a training set and a verification set;
constructing an improved MobileNet network, training the improved MobileNet network by using the sample data set, and enabling the MobileNet network to output position points of human face features;
and inputting the face images of the verification set into the trained improved MobileNet network, if the distance between the output of the improved MobileNet network and the mark of the verification set is smaller than a second preset threshold value, indicating that the improved MobileNet network passes the verification, and taking the verified improved MobileNet network as a feature training model.
Example four:
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and in the present application, an electronic device 100 for implementing a face acquisition and recognition method according to an embodiment of the present application may be described by using the schematic diagram shown in fig. 3.
As shown in fig. 3, an electronic device 100 includes one or more processors 102, one or more memory devices 104, and the like, which are interconnected via a bus system and/or other type of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 3 are only exemplary and not limiting, and the electronic device may have some of the components shown in fig. 3 and may have other components and structures not shown in fig. 3 as needed.
The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. On which one or more computer program instructions may be stored that may be executed by processor 102 to implement the functions of the embodiments of the application (as implemented by the processor) described below and/or other desired functions. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The invention also provides a computer storage medium on which a computer program is stored, in which the method of the invention, if implemented in the form of software functional units and sold or used as a stand-alone product, can be stored. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer storage medium and used by a processor to implement the steps of the embodiments of the method. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer storage medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer storage media may include content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer storage media that does not include electrical carrier signals and telecommunications signals as subject to legislation and patent practice.
Various other modifications and changes may be made by those skilled in the art based on the above-described technical solutions and concepts, and all such modifications and changes should fall within the scope of the claims of the present invention.

Claims (10)

1. A face acquisition and recognition method is characterized by comprising the following steps:
acquiring a large number of face images, extracting face features of the large number of face images, marking face position points corresponding to the face features, and storing the face position points in a face library;
triggering an intelligent terminal to perform face recognition; extracting the face features of the face to be recognized;
acquiring the position points of the face features of the face to be recognized; comparing these position points one by one with the position points of the face features of the n face images in the face library, where n is a natural number; and, when the distance between the position points of the face features of the face to be recognized and the position points of the face features of a face image in the face library is smaller than a first set threshold, returning the person information corresponding to that face image.
2. The face acquisition and recognition method according to claim 1, further comprising: and comparing the face characteristics of the face to be recognized with the face characteristics of a certain face image in the face library, and returning the similarity between the face characteristics of the face to be recognized and the face characteristics of the face image.
3. The face acquisition and recognition method according to claim 1, further comprising: and when the distances between the position points of the face features of the face to be recognized and the position points of the face features of all the face images in the face library are not less than a first set threshold, judging that the face recognition fails, and storing the face images captured by the current face recognition.
4. The face acquisition and recognition method according to claim 3, further comprising: and judging whether the reason of face recognition failure has an environmental reason or not, wherein the environmental reason comprises that the illumination intensity of the environment where the intelligent terminal is located is lower than a preset intensity threshold value.
5. The face acquisition and recognition method according to claim 1, wherein the extracting the face features of the face to be recognized comprises:
establishing a face feature point shape-driven depth model based on a convolutional neural network;
detecting faces with a RetinaFace face detection model and generating multiple layers of detection frames of different sizes, the detection frames comprising target frames of different sizes; training the face feature point shape-driven depth model within each layer of detection frames;
and performing face feature extraction and fusion with the trained face feature point shape-driven depth model, wherein the fusion comprises fusing face feature position points with face poses.
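The claim does not detail how face feature position points and face poses are fused; a simple stand-in is flattening the landmark coordinates and concatenating a pose estimate, as in this illustrative sketch (function name and vector layout are assumptions):

```python
import numpy as np

def fuse_features(landmark_points, pose_angles):
    """Fuse per-face landmark position points with head-pose angles
    (yaw, pitch, roll) into one feature vector by flattening and
    concatenation -- a simple stand-in for the model's fusion step."""
    landmarks = np.asarray(landmark_points, dtype=float).ravel()
    pose = np.asarray(pose_angles, dtype=float)
    return np.concatenate([landmarks, pose])

# 5 landmark points (x, y) plus a (yaw, pitch, roll) pose estimate
vec = fuse_features([[30, 40], [70, 40], [50, 60], [35, 80], [65, 80]],
                    [5.0, -2.0, 0.5])
print(vec.shape)  # → (13,)  i.e. 10 landmark coordinates + 3 pose angles
```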
6. The face acquisition and recognition method of claim 5, wherein the face feature point shape-driven depth model comprises a main network and an auxiliary network, and the main network is a regional convolutional network.
7. The face acquisition and recognition method according to claim 1, wherein the extracting the face features of the face to be recognized comprises:
establishing an FECNN parameter model, and feeding the face feature position points and the face image into the FECNN parameter model for feature extraction to obtain the face features of the face to be recognized.
8. The face acquisition and recognition method of claim 7, wherein the establishing the FECNN parameter model comprises: constructing a plurality of convolutional layers, a plurality of pooling layers, a plurality of Inception layers, a fully-connected feature extraction layer and a softmax classification layer, wherein the convolutional, pooling and Inception layers are interleaved in sequence and are then connected in turn to the fully-connected feature extraction layer and the softmax classification layer.
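The tail of the network described above (a fully-connected feature extraction layer feeding a softmax classification layer) can be sketched in NumPy; the weights and input here are toy values, not the patent's trained parameters:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax classification layer."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def fc_then_softmax(features, weights, bias):
    """Fully-connected feature extraction layer followed by softmax,
    mirroring the last two layers of the claimed FECNN."""
    return softmax(weights @ features + bias)

# Toy 3-d feature vector with identity weights and zero bias
probs = fc_then_softmax(np.array([0.2, -0.1, 0.5]), np.eye(3), np.zeros(3))
print(probs)  # a probability distribution over classes, summing to 1
```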
9. A face acquisition and recognition system, comprising:
a face sample acquisition module, configured to acquire a large number of face images, extract the face features of those face images, label the face position points corresponding to the face features, and store them in a face library;
a face recognition module, configured to perform face recognition and extract the face features of a face to be recognized; and
a face matching module, configured to acquire position points of the face features of the face to be recognized; compare the position points of the face features of the face to be recognized one by one with the position points of the face features of n face images in the face library, wherein n is a natural number; and when the distance between the position points of the face features of the face to be recognized and the position points of the face features of a face image in the face library is smaller than a first set threshold, return the person information corresponding to that face image.
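The interplay of the acquisition and matching modules can be sketched as a small class; the storage format, distance metric, and threshold below are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

class FaceLibrary:
    """Minimal stand-in for the face library: landmark position points
    stored alongside the corresponding person information."""

    def __init__(self):
        self.entries = []

    def enroll(self, person_info, points):
        # Sample-acquisition module: store labeled position points
        self.entries.append({"person_info": person_info,
                             "points": np.asarray(points, dtype=float)})

    def match(self, query_points, threshold):
        # Matching module: one-by-one landmark distance comparison
        q = np.asarray(query_points, dtype=float)
        for e in self.entries:
            if np.linalg.norm(q - e["points"], axis=1).mean() < threshold:
                return e["person_info"]
        return None  # recognition failed

lib = FaceLibrary()
lib.enroll("Alice", [[0, 0], [1, 1]])
print(lib.match([[0.05, 0.0], [1.0, 1.05]], threshold=0.2))  # → Alice
```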
10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the face acquisition and recognition method according to any one of claims 1 to 8.
CN202110429920.5A 2021-04-21 2021-04-21 Face acquisition and recognition method, system and storage medium Pending CN113420585A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110429920.5A CN113420585A (en) 2021-04-21 2021-04-21 Face acquisition and recognition method, system and storage medium


Publications (1)

Publication Number Publication Date
CN113420585A 2021-09-21

Family

ID=77711864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110429920.5A Pending CN113420585A (en) 2021-04-21 2021-04-21 Face acquisition and recognition method, system and storage medium

Country Status (1)

Country Link
CN (1) CN113420585A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463172A (en) * 2014-12-09 2015-03-25 中国科学院重庆绿色智能技术研究院 Face feature extraction method based on face feature point shape drive depth model
CN105243374A (en) * 2015-11-02 2016-01-13 湖南拓视觉信息技术有限公司 Three-dimensional human face recognition method and system, and data processing device applying same
CN106557743A (en) * 2016-10-26 2017-04-05 桂林电子科技大学 A kind of face characteristic extraction system and method based on FECNN
CN109117801A (en) * 2018-08-20 2019-01-01 深圳壹账通智能科技有限公司 Method, apparatus, terminal and the computer readable storage medium of recognition of face
CN111597872A (en) * 2020-03-27 2020-08-28 北京梦天门科技股份有限公司 Health supervision law enforcement illegal medical practice face recognition method based on deep learning
CN112232117A (en) * 2020-09-08 2021-01-15 深圳微步信息股份有限公司 Face recognition method, face recognition device and storage medium
CN112613385A (en) * 2020-12-18 2021-04-06 成都三零凯天通信实业有限公司 Face recognition method based on monitoring video



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination