CN113239885A - Face detection and recognition method and system - Google Patents
- Publication number
- CN113239885A (application CN202110626119.XA)
- Authority
- CN
- China
- Prior art keywords
- face
- central point
- recognition
- diagram
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 64
- 238000000034 method Methods 0.000 title claims abstract description 21
- 239000013598 vector Substances 0.000 claims abstract description 66
- 238000012549 training Methods 0.000 claims abstract description 31
- 238000013135 deep learning Methods 0.000 claims abstract description 17
- 238000007781 pre-processing Methods 0.000 claims abstract description 10
- 238000000605 extraction Methods 0.000 claims abstract description 7
- 238000010586 diagram Methods 0.000 claims description 78
- 230000004044 response Effects 0.000 claims description 16
- 238000004364 calculation method Methods 0.000 claims description 7
- 230000001629 suppression Effects 0.000 claims description 7
- 230000006870 function Effects 0.000 claims description 5
- 238000013528 artificial neural network Methods 0.000 claims description 3
- 238000005070 sampling Methods 0.000 claims description 3
- 230000007547 defect Effects 0.000 description 2
- 238000002372 labelling Methods 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 230000004075 alteration Effects 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000006073 displacement reaction Methods 0.000 description 1
- 230000001815 facial effect Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000002265 prevention Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a face detection and recognition method, which comprises the following steps: S1: preprocessing images marked with face frames to generate training samples; S2: constructing a face detection and recognition network, wherein the face detection and recognition network adopts a deep learning network and fuses high-level and low-level network features; S3: inputting the training samples into the constructed face detection and recognition network for training until the training loss value is smaller than a preset threshold, obtaining a deep learning network capable of outputting face detection and face recognition results. According to the invention, a face detection and recognition network is designed that treats face detection as a face center point problem and learns face center point detection jointly with face feature vector extraction; when a face frame is obtained, the face feature vector corresponding to that frame is obtained at the same time, and face feature vector comparison then yields the face recognition result, so that the network outputs both face detection and face recognition results.
Description
Technical Field
The invention relates to the field of artificial intelligence, in particular to a face detection and recognition method and system.
Background
With the development and progress of science and technology, face recognition has very wide application in fighting crime, preventing fraud, safeguarding public safety, improving customer experience across industries, and so on, for example identifying criminal suspects, finding lost children, smart stores, and face payment. Face recognition is the process of recognizing or verifying a person's identity using facial information, and generally comprises three steps. Step 1: face detection, an indispensable step that detects and locates the faces in an image or video. Step 2: aligning the detected faces and converting each face into a feature vector using face feature extraction technology. Step 3: computing the face similarity of the obtained feature vectors to judge whether two faces belong to the same person.
The existing face recognition method is time-consuming because the three steps are executed serially: face detection and face recognition are implemented by two separately designed networks, and during face recognition the time spent extracting feature vectors is proportional to the number of detected face frames. The more faces there are, the longer feature extraction takes, so a face recognition method of this kind is slow.
Disclosure of Invention
In order to solve the defects in the prior art, the invention designs a face recognition method which can simultaneously carry out two tasks of face detection and face recognition, thereby improving the efficiency of face detection and recognition and saving computer resources.
The technical scheme of the invention is as follows:
a face detection and recognition method, characterized by comprising the following steps:
s1: preprocessing the image marked with the face frame to generate a training sample;
s2: constructing a face detection and recognition network, wherein the face detection and recognition network adopts a deep learning network and fuses network high-level features and low-level features;
s3: inputting training samples into the constructed face detection and recognition network for training until the training loss value is smaller than a preset threshold value, and obtaining a deep learning network capable of outputting face detection and face recognition results;
the deep learning network for face detection comprises the following steps:
s31: generating a face central point thermodynamic diagram, a face central point offset diagram and a face width and height diagram;
s32: executing a non-maximum value suppression algorithm on the face central point thermodynamic diagram, extracting peak value points, respectively calculating thermodynamic response values, selecting points with the thermodynamic response values larger than a threshold value as candidate face central points, extracting face central point offset values at corresponding positions of the face central point offset map, adding to obtain face central point positions, and finally extracting face width and height values at corresponding positions of the face width and height map to generate a face frame;
the deep learning network for face recognition comprises the following steps:
s33: when step S31 is executed, image feature vectors of the entire image are extracted at the same time;
s34: selecting a feature vector corresponding to the position of the face frame from image feature vectors as a face feature vector, and matching the face feature vector with each face feature vector stored in a database to obtain a face recognition result;
the training loss value is formed by superposing face central point thermodynamic diagram loss, face central point offset diagram loss, face width and height diagram loss and face recognition loss.
Preferably, let the i-th face frame on the image be represented by its top-left and bottom-right points $(x_1^{(i)}, y_1^{(i)})$ and $(x_2^{(i)}, y_2^{(i)})$, and let the face center point of the face frame be $c_i = \left(\frac{x_1^{(i)}+x_2^{(i)}}{2},\; \frac{y_1^{(i)}+y_2^{(i)}}{2}\right)$. Let $\tilde{c}_i = \lfloor c_i/n \rfloor$ denote the corresponding position of $c_i$ on the face central point thermodynamic diagram (with down-sampling stride $n$). The response value of the correspondingly generated face central point thermodynamic diagram is represented as:

$$Y_{xy} = \sum_{i=1}^{N}\exp\left(-\frac{(x-\tilde{c}_{i,x})^2+(y-\tilde{c}_{i,y})^2}{2\sigma_c^2}\right)$$

where N represents the number of face frames on the image and $\sigma_c$ is the standard deviation of the Gaussian function;

the loss of the face central point thermodynamic diagram is represented as:

$$L_{hm} = -\frac{1}{N}\sum_{xy}\begin{cases}(1-\hat{Y}_{xy})^{\alpha}\log\hat{Y}_{xy}, & Y_{xy}=1\\ (1-Y_{xy})^{\beta}\,\hat{Y}_{xy}^{\alpha}\log\left(1-\hat{Y}_{xy}\right), & \text{otherwise}\end{cases}$$

where $\alpha$ and $\beta$ are modulation coefficients, and $\hat{Y}_{xy}$ represents the face central point heat value obtained by network prediction.
Preferably, let the i-th face frame on the image be represented by its top-left and bottom-right points $(x_1^{(i)}, y_1^{(i)})$ and $(x_2^{(i)}, y_2^{(i)})$, and let its width and height be $S_i = \left(x_2^{(i)} - x_1^{(i)},\; y_2^{(i)} - y_1^{(i)}\right)$. The face width-height loss is expressed as:

$$L_{wh} = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{S}_i - S_i\right|$$

where $\hat{S}_i$ represents the face width and height obtained by network prediction;

let the face center point of the i-th face frame on the image be $c_i$, let $\tilde{c}_i = \lfloor c_i/n \rfloor$ be its corresponding position on the face central point thermodynamic diagram, and let the center point offset be $o_i = c_i/n - \tilde{c}_i$. The face center point offset loss is then expressed as:

$$L_{off} = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{O}_{\tilde{c}_i} - o_i\right|$$

where $\hat{O}_{\tilde{c}_i}$ is the offset predicted by the network at $\tilde{c}_i$.
Preferably, let the target center point of the i-th face frame on the face central point thermodynamic diagram be $\tilde{c}_i$, extract the corresponding feature vector $e_i$ from the image feature vector map, and map it to a class distribution vector $p_i(k)$. Denoting the corresponding one-hot label by $L_i(k)$, the face recognition loss is expressed as:

$$L_{id} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K} L_i(k)\log p_i(k)$$

where N is the number of face frames and K is the number of categories; $p_i(k)$ is the probability that the i-th face frame belongs to the k-th id, and $L_i(k)$ is the label of the i-th face frame.
Preferably, the face detection and recognition network uses resnet34 or Googlenet as a backbone network.
A face detection and recognition system comprising:
the image preprocessing module is used for preprocessing the image marked with the face frame to generate a training sample;
the human face feature extraction module is used for generating a human face central point thermodynamic diagram, a human face central point offset diagram and a human face width and height diagram and extracting an image feature vector of the whole image;
the training loss calculation module is used for calculating the thermodynamic diagram loss of the face center point, the offset diagram loss of the face center point, the width and height diagram loss of the face and the face recognition loss, performing superposition calculation, finishing training when the training loss value is smaller than a preset threshold value, and obtaining a deep learning network capable of outputting the face detection and face recognition results;
the human face detection module is used for executing a non-maximum value suppression algorithm on the face central point thermodynamic diagram, extracting peak points and calculating their thermal response values, selecting points whose thermal response values are greater than a threshold as candidate face center points, extracting the face center point offset values at the corresponding positions of the face center point offset diagram and adding them to obtain the face center point positions, and finally extracting the face width and height values at the corresponding positions of the face width-height diagram to generate face frames;
and the face recognition module is used for selecting the feature vector corresponding to the face frame position from the image feature vectors as a face feature vector, and matching the face feature vector with each face feature vector stored in the database to obtain a face recognition result.
By adopting the technical scheme, compared with the prior art, the invention has the following beneficial effects:
according to the invention, a face detection and recognition network is designed, the face detection is regarded as the face central point problem, the face central point detection and the face feature vector extraction are combined for learning, a face frame is obtained, a face feature vector corresponding to the face frame can be obtained, then the face feature vector comparison is carried out to obtain a face recognition result, and therefore the network outputs the face detection and face recognition results. The face detection and the face recognition share one network, so that the inference time is reduced, the forward time is irrelevant to the number of faces in the picture to be detected, the face recognition efficiency is improved, and moreover, the multi-task learning can supervise the learning mutually, and the network performance is favorably improved; on the other hand, the problem that the face detection is regarded as the face central point is solved, and the technical difficulty that ambiguity is easily caused when a plurality of faces are in charge of identity information of the same face in the prior art is overcome.
Drawings
FIG. 1 is a flow chart of a face detection and recognition method of the present invention;
FIG. 2 is a flowchart of the overall operation of the face detection and recognition method of the present invention;
fig. 3 is a diagram of a face detection and recognition network according to the present invention.
Detailed Description
The following further describes embodiments of the present invention with reference to the drawings. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Referring to fig. 1, the face detection and recognition method of the present invention includes the following steps:
s1: preprocessing the image marked with the face frame to generate a training sample;
s2: constructing a face detection and recognition network, wherein the face detection and recognition network adopts a deep learning network and fuses network high-level features and low-level features;
referring to fig. 3, in the embodiment, a resnet34 network is used as the backbone of the face detection and recognition network, and the network fuses its high-level and low-level features through multiple skip connections, making the features more robust;
s3: inputting training samples into the constructed face detection and recognition network for training until the training loss value is smaller than a preset threshold value, and obtaining a deep learning network capable of outputting face detection and face recognition results;
referring to fig. 1 and 2, the performing of the face detection by the deep learning network includes the following steps:
s31: generating a face central point thermodynamic diagram, a face central point offset diagram and a face width and height diagram;
in the embodiment, the stride is set to 4; a C × H × W image is input, where C represents the number of channels and H and W respectively represent the height and width of the image, and after the image passes through the resnet34 network, a feature map of shape C × H/4 × W/4 is finally obtained;
referring to fig. 2 and fig. 3, the network includes a part implementing the face detection function: the first branch is used for predicting the face central point thermodynamic diagram and is composed of a 256 × 3 × 3 convolution and a 1 × 1 × 1 convolution, finally obtaining a 1 × H/4 × W/4 thermodynamic diagram $\hat{Y}$; the second branch is used for predicting the face center point offset and is composed of a 256 × 3 × 3 convolution and a 2 × 1 × 1 convolution, finally obtaining a 2 × H/4 × W/4 offset prediction result $\hat{O}$; the third branch is used for predicting the face width and height and is composed of a 256 × 3 × 3 convolution and a 2 × 1 × 1 convolution, finally obtaining a 2 × H/4 × W/4 face width-height prediction result $\hat{S}$;
S32: executing a non-maximum value suppression algorithm on the face central point thermodynamic diagram, extracting peak value points, respectively calculating thermodynamic response values, selecting points with the thermodynamic response values larger than a threshold value as candidate face central points, extracting face central point offset values at corresponding positions of the face central point offset map, adding to obtain face central point positions, and finally extracting face width and height values at corresponding positions of the face width and height map to generate a face frame;
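The decoding in step S32 can be sketched in NumPy. This is an illustrative reconstruction rather than the patent's own code: the function name is assumed, and a 3 × 3 local-maximum test stands in for the non-maximum suppression step. The default threshold matches the embodiment's $T_1 = 0.8$.

```python
import numpy as np

def decode_faces(heatmap, offsets, sizes, thresh=0.8, stride=4):
    """Decode face frames from the three prediction maps (step S32 sketch).

    heatmap: (H, W) center-point heat values in [0, 1]
    offsets: (2, H, W) sub-pixel center offsets (x, y)
    sizes:   (2, H, W) face widths and heights in input-image pixels
    """
    H, W = heatmap.shape
    # 3x3 max-pooling NMS: a cell survives only if it is the local maximum
    padded = np.pad(heatmap, 1, constant_values=-np.inf)
    windows = np.stack([padded[dy:dy + H, dx:dx + W]
                        for dy in range(3) for dx in range(3)])
    peaks = (heatmap == windows.max(axis=0)) & (heatmap > thresh)

    boxes = []
    for y, x in zip(*np.nonzero(peaks)):
        # refine the center with the predicted offset, then scale to pixels
        cx = (x + offsets[0, y, x]) * stride
        cy = (y + offsets[1, y, x]) * stride
        w, h = sizes[0, y, x], sizes[1, y, x]
        boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes
```

With stride 4 this maps each surviving heatmap cell back to input-image coordinates, which is why the offset is added before scaling.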
referring to fig. 2 and 3, the performing of the face recognition by the deep learning network includes the following steps:
s33: when step S31 is executed, image feature vectors of the entire image are extracted at the same time;
s34: selecting a feature vector corresponding to the position of the face frame from image feature vectors as a face feature vector, and matching the face feature vector with each face feature vector stored in a database to obtain a face recognition result;
in the embodiment, the recognition branch consists of a 256 × 3 × 3 convolution, a 128 × 1 × 1 convolution (used to obtain the face feature vector), and a K × 1 × 1 convolution layer, where K represents the number of face ids, i.e., the number of classification categories; the corresponding face feature vector is used as the identifier of the face id.
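Step S34's feature selection is an indexed lookup on the 128 × H/4 × W/4 feature map: the 128-dimensional column at each detected center cell is that face's feature vector. A minimal sketch with assumed names:

```python
import numpy as np

def gather_face_features(feature_map, centers):
    """feature_map: (128, H, W) image feature vector map;
    centers: list of (x, y) candidate face center cells on the
    down-sampled grid.  Returns an (n, 128) array, one row per face."""
    return np.stack([feature_map[:, y, x] for (x, y) in centers])
```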
Referring to fig. 2, in an embodiment of the present invention, a face detection and recognition method includes the following steps:
making a feature vector database:
taking the face image of each id in the database as network input, extracting the corresponding feature vector $e_j$ as the identifier of each id, and obtaining the database face feature vector set $E = \{e_j \mid j = 1, \ldots, K\}$.
Face detection and feature extraction:
using a 3 × 960 × 720 image as input, a 1 × 240 × 180 face central point thermodynamic diagram, a 2 × 240 × 180 face center point offset map, a 2 × 240 × 180 face width-height map, and a 128 × 240 × 180 image feature vector map are obtained. A non-maximum value suppression algorithm is executed on the 1 × 240 × 180 face central point thermodynamic diagram to extract peak face center points, yielding the $n$ candidate face center points whose thermal response values are greater than $T_1$; the corresponding face center point offsets and face widths and heights are then taken, and the face frames are obtained after calculation; finally, the feature vectors corresponding to the face frames are taken from the 128 × 240 × 180 image feature vector map, i.e., the face feature vectors.
Face matching:
the extracted face feature vectors are compared with the database face feature vector set $E = \{e_j \mid j = 1, \ldots, K\}$; if a face feature vector has its highest similarity value with $e_j$ in the database and that similarity value is greater than the threshold $T_2$, the detected face and the database id $j$ are considered to correspond to the same person.
In this example, $T_1 = 0.8$ and $T_2 = 0.6$ are taken.
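The patent does not specify the similarity measure; assuming cosine similarity, the face-matching step with threshold $T_2$ can be sketched as follows (the function name is an assumption):

```python
import numpy as np

def match_face(e, db, t2=0.6):
    """e: one 128-d face feature vector; db: (K, 128) database vectors,
    one per id.  Returns (matched id, best similarity); the id is None
    when the best cosine similarity does not exceed the threshold t2."""
    db_norm = db / np.linalg.norm(db, axis=1, keepdims=True)
    sims = db_norm @ (e / np.linalg.norm(e))
    j = int(np.argmax(sims))
    return (j if sims[j] > t2 else None, float(sims[j]))
```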
In the embodiment of the invention, the training loss value is formed by superposing face central point thermodynamic diagram loss, face central point offset diagram loss, face width and height diagram loss and face identification loss.
Further, let the i-th face frame on the image be represented by its top-left and bottom-right points $(x_1^{(i)}, y_1^{(i)})$ and $(x_2^{(i)}, y_2^{(i)})$, and let the face center point of the face frame be $c_i = \left(\frac{x_1^{(i)}+x_2^{(i)}}{2},\; \frac{y_1^{(i)}+y_2^{(i)}}{2}\right)$. Let $\tilde{c}_i = \lfloor c_i/n \rfloor$ denote the corresponding position of $c_i$ on the face central point thermodynamic diagram (with down-sampling stride $n$). The response value of the correspondingly generated face central point thermodynamic diagram is represented as:

$$Y_{xy} = \sum_{i=1}^{N}\exp\left(-\frac{(x-\tilde{c}_{i,x})^2+(y-\tilde{c}_{i,y})^2}{2\sigma_c^2}\right)$$

where N represents the number of face frames on the image and $\sigma_c$ is the standard deviation of the Gaussian function;

the loss of the face central point thermodynamic diagram is represented as:

$$L_{hm} = -\frac{1}{N}\sum_{xy}\begin{cases}(1-\hat{Y}_{xy})^{\alpha}\log\hat{Y}_{xy}, & Y_{xy}=1\\ (1-Y_{xy})^{\beta}\,\hat{Y}_{xy}^{\alpha}\log\left(1-\hat{Y}_{xy}\right), & \text{otherwise}\end{cases}$$

where $\alpha$ and $\beta$ are modulation coefficients, set to 1 and 2 respectively in this embodiment, and $\hat{Y}_{xy}$ represents the face central point heat value obtained by network prediction.
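The response-value formula can be sketched in NumPy as follows. The function name is an assumption, and the code merges overlapping Gaussians with an elementwise maximum, a common variant that keeps each peak at exactly 1 for the loss below, rather than the literal summation in the formula:

```python
import numpy as np

def center_heatmap(centers, shape, sigma_c=2.0):
    """Ground-truth center point map: each down-sampled face center
    (cx, cy) contributes exp(-((x-cx)^2 + (y-cy)^2) / (2 * sigma_c^2))."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    Y = np.zeros(shape)
    for cx, cy in centers:
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma_c ** 2))
        Y = np.maximum(Y, g)  # max-merge keeps every peak at exactly 1
    return Y
```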
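A NumPy sketch of the thermodynamic-diagram loss under the values above ($\alpha = 1$, $\beta = 2$); the function name and the clipping epsilon are assumptions:

```python
import numpy as np

def heatmap_loss(pred, gt, alpha=1.0, beta=2.0, eps=1e-6):
    """Penalty-reduced focal loss: cells with gt == 1 are positives,
    all other cells are negatives down-weighted by (1 - gt)^beta."""
    pred = np.clip(pred, eps, 1.0 - eps)
    pos = gt == 1.0
    n = max(int(pos.sum()), 1)  # number of face center points
    pos_term = ((1.0 - pred) ** alpha * np.log(pred))[pos].sum()
    neg_term = ((1.0 - gt) ** beta * pred ** alpha
                * np.log(1.0 - pred))[~pos].sum()
    return -(pos_term + neg_term) / n
```

As expected for a loss, a prediction close to the ground truth scores lower than a uniform one.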
Further, let the i-th face frame on the image be represented by its top-left and bottom-right points $(x_1^{(i)}, y_1^{(i)})$ and $(x_2^{(i)}, y_2^{(i)})$, and let its width and height be $S_i = \left(x_2^{(i)} - x_1^{(i)},\; y_2^{(i)} - y_1^{(i)}\right)$. The face width-height loss is expressed as:

$$L_{wh} = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{S}_i - S_i\right|$$

where $\hat{S}_i$ represents the face width and height obtained by network prediction;

in this embodiment, let the face center point of the i-th face frame on the image be $c_i$, let $\tilde{c}_i = \lfloor c_i/n \rfloor$ be its corresponding position on the face central point thermodynamic diagram, and let the center point offset be $o_i = c_i/n - \tilde{c}_i$. The face center point offset loss is then expressed as:

$$L_{off} = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{O}_{\tilde{c}_i} - o_i\right|$$

where $\hat{O}_{\tilde{c}_i}$ is the offset predicted by the network at $\tilde{c}_i$.

In this embodiment, $n = 4$; for a label box $(x_1, y_1, x_2, y_2)$, its width and height are $(x_2 - x_1,\; y_2 - y_1)$, and the corresponding center point offset is $c/4 - \lfloor c/4 \rfloor$, where $c$ is the box center.
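Both the width-height loss and the center point offset loss are mean L1 distances evaluated only at the N annotated center cells; a sketch with an assumed helper name, plus a worked offset example for stride n = 4 using a hypothetical center:

```python
import numpy as np

def mean_l1(pred, gt):
    """Mean absolute error over the N face frames; pred and gt are
    (N, 2) arrays of (w, h) pairs or of (dx, dy) offset pairs."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return np.abs(pred - gt).sum() / len(gt)

# Worked offset example (hypothetical values): a face center at pixel
# (101, 58) maps to heatmap cell (25, 14), with center point offset
# (101 / 4 - 25, 58 / 4 - 14) = (0.25, 0.5).
```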
Further, let the target center point of the i-th face frame on the face central point thermodynamic diagram be $\tilde{c}_i$, extract the corresponding feature vector $e_i$ from the image feature vector map, and map it to a class distribution vector $p_i(k)$. Denoting the corresponding one-hot label by $L_i(k)$, the face recognition loss is expressed as:

$$L_{id} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K} L_i(k)\log p_i(k)$$

where N is the number of face frames and K is the number of categories; $p_i(k)$ is the probability that the i-th face frame belongs to the k-th id, and $L_i(k)$ is the label of the i-th face frame.

In this embodiment, K is 10000; when the i-th face frame belongs to the 1st id, $L_i(k) = (1, 0, 0, \ldots, 0)$, in which 9999 entries are 0.
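The face recognition loss is the standard cross-entropy between the predicted class distribution and the one-hot id label; a sketch with an assumed name:

```python
import numpy as np

def recognition_loss(p, labels):
    """p: (N, K) predicted class distributions p_i(k);
    labels: (N, K) one-hot id labels L_i(k).
    Returns the mean cross-entropy over the N face frames."""
    p = np.clip(np.asarray(p, float), 1e-12, 1.0)  # guard log(0)
    return -(np.asarray(labels, float) * np.log(p)).sum() / len(p)
```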
In the embodiment provided by the invention, the face detection and recognition network adopts the resnet34 or Googlenet as a backbone network.
The invention also provides a face detection and recognition system, comprising:
the image preprocessing module is used for preprocessing the image marked with the face frame to generate a training sample;
the human face feature extraction module is used for generating a human face central point thermodynamic diagram, a human face central point offset diagram and a human face width and height diagram and extracting an image feature vector of the whole image;
the training loss calculation module is used for calculating the thermodynamic diagram loss of the face center point, the offset diagram loss of the face center point, the width and height diagram loss of the face and the face recognition loss, performing superposition calculation, finishing training when the training loss value is smaller than a preset threshold value, and obtaining a deep learning network capable of outputting the face detection and face recognition results;
the human face detection module is used for executing a non-maximum value suppression algorithm on the face central point thermodynamic diagram, extracting peak points and calculating their thermal response values, selecting points whose thermal response values are greater than a threshold as candidate face center points, extracting the face center point offset values at the corresponding positions of the face center point offset diagram and adding them to obtain the face center point positions, and finally extracting the face width and height values at the corresponding positions of the face width-height diagram to generate face frames;
and the face recognition module is used for selecting the feature vector corresponding to the face frame position from the image feature vectors as a face feature vector, and matching the face feature vector with each face feature vector stored in the database to obtain a face recognition result.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, and such variations still fall within the scope of protection of the invention.
Claims (6)
1. A face detection and recognition method, characterized by comprising the following steps:
s1: preprocessing the image marked with the face frame to generate a training sample;
s2: constructing a face detection and recognition network, wherein the face detection and recognition network adopts a deep learning network and fuses network high-level features and low-level features;
s3: inputting training samples into the constructed face detection and recognition network for training until the training loss value is smaller than a preset threshold value, and obtaining a deep learning network capable of outputting face detection and face recognition results;
the deep learning network for face detection comprises the following steps:
s31: generating a face central point thermodynamic diagram, a face central point offset diagram and a face width and height diagram;
s32: executing a non-maximum value suppression algorithm on the face central point thermodynamic diagram, extracting peak value points, respectively calculating thermodynamic response values, selecting points with the thermodynamic response values larger than a threshold value as candidate face central points, extracting face central point offset values at corresponding positions of the face central point offset map, adding to obtain face central point positions, and finally extracting face width and height values at corresponding positions of the face width and height map to generate a face frame;
the deep learning network for face recognition comprises the following steps:
s33: when step S31 is executed, image feature vectors of the entire image are extracted at the same time;
s34: selecting a feature vector corresponding to the position of the face frame from image feature vectors as a face feature vector, and matching the face feature vector with each face feature vector stored in a database to obtain a face recognition result;
the training loss value is formed by superposing face central point thermodynamic diagram loss, face central point offset diagram loss, face width and height diagram loss and face recognition loss.
2. A face detection and recognition method as claimed in claim 1, wherein:
let the i-th face frame on the image be represented by its top-left and bottom-right points $(x_1^{(i)}, y_1^{(i)})$ and $(x_2^{(i)}, y_2^{(i)})$, and let the face center point of the face frame be $c_i = \left(\frac{x_1^{(i)}+x_2^{(i)}}{2},\; \frac{y_1^{(i)}+y_2^{(i)}}{2}\right)$; let $\tilde{c}_i = \lfloor c_i/n \rfloor$ denote the corresponding position of $c_i$ on the face central point thermodynamic diagram (with down-sampling stride $n$); the response value of the correspondingly generated face central point thermodynamic diagram is represented as:

$$Y_{xy} = \sum_{i=1}^{N}\exp\left(-\frac{(x-\tilde{c}_{i,x})^2+(y-\tilde{c}_{i,y})^2}{2\sigma_c^2}\right)$$

where N represents the number of face frames on the image and $\sigma_c$ is the standard deviation of the Gaussian function;

the loss of the face central point thermodynamic diagram is represented as:

$$L_{hm} = -\frac{1}{N}\sum_{xy}\begin{cases}(1-\hat{Y}_{xy})^{\alpha}\log\hat{Y}_{xy}, & Y_{xy}=1\\ (1-Y_{xy})^{\beta}\,\hat{Y}_{xy}^{\alpha}\log\left(1-\hat{Y}_{xy}\right), & \text{otherwise}\end{cases}$$

where $\alpha$ and $\beta$ are modulation coefficients and $\hat{Y}_{xy}$ is the face central point heat value obtained by network prediction.
3. A face detection and recognition method as claimed in claim 2, wherein:
let the i-th face frame on the image be represented by its top-left and bottom-right points $(x_1^{(i)}, y_1^{(i)})$ and $(x_2^{(i)}, y_2^{(i)})$, and let its width and height be $S_i = \left(x_2^{(i)} - x_1^{(i)},\; y_2^{(i)} - y_1^{(i)}\right)$; the face width-height loss is expressed as:

$$L_{wh} = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{S}_i - S_i\right|$$

where $\hat{S}_i$ represents the face width and height obtained by network prediction;

let the face center point of the i-th face frame on the image be $c_i$, let $\tilde{c}_i = \lfloor c_i/n \rfloor$ be its corresponding position on the face central point thermodynamic diagram, and let the center point offset be $o_i = c_i/n - \tilde{c}_i$; the face center point offset loss is then expressed as:

$$L_{off} = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{O}_{\tilde{c}_i} - o_i\right|$$

where $\hat{O}_{\tilde{c}_i}$ is the offset predicted by the network at $\tilde{c}_i$.
4. A face detection and recognition method as claimed in claim 3, wherein:
let the target center point of the i-th face frame on the face central point thermodynamic diagram be $\tilde{c}_i$, extract the corresponding feature vector $e_i$ from the image feature vector map, and map it to a class distribution vector $p_i(k)$; denoting the corresponding one-hot label by $L_i(k)$, the face recognition loss is expressed as:

$$L_{id} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K} L_i(k)\log p_i(k)$$

where N is the number of face frames and K is the number of categories; $p_i(k)$ is the probability that the i-th face frame belongs to the k-th id, and $L_i(k)$ is the label of the i-th detection frame.
5. A face detection and recognition method as claimed in any one of claims 1 to 4, wherein: the face detection and recognition network adopts resnet34 or Googlenet as a backbone network.
6. A face detection and recognition system, comprising:
the image preprocessing module is used for preprocessing the image marked with the face frame to generate a training sample; the human face feature extraction module is used for generating a human face central point thermodynamic diagram, a human face central point offset diagram and a human face width and height diagram and extracting an image feature vector of the whole image;
the training loss calculation module is used for calculating the thermodynamic diagram loss of the face center point, the offset diagram loss of the face center point, the width and height diagram loss of the face and the face recognition loss, performing superposition calculation, finishing training when the training loss value is smaller than a preset threshold value, and obtaining a deep learning network capable of outputting the face detection and face recognition results;
the human face detection module is used for executing a non-maximum value suppression algorithm on the face central point thermodynamic diagram, extracting peak points and calculating their thermal response values, selecting points whose thermal response values are greater than a threshold as candidate face center points, extracting the face center point offset values at the corresponding positions of the face center point offset diagram and adding them to obtain the face center point positions, and finally extracting the face width and height values at the corresponding positions of the face width-height diagram to generate face frames;
and the face recognition module is used for selecting, from the image feature vector, the feature vector corresponding to the face box position as the face feature vector, and matching it against each face feature vector stored in the database to obtain the face recognition result.
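The three output maps produced by the feature extraction module above can be built from annotated face boxes in the CenterNet style. A hedged sketch, assuming a 4× down-sampling stride, a Gaussian center target, and an (x1, y1, x2, y2) box format — the stride, sigma rule and array layout are assumptions, not values from the patent:

```python
import numpy as np

def make_targets(boxes, out_h, out_w, stride=4):
    """Build center-point heatmap, offset map and width-height map targets."""
    heatmap = np.zeros((out_h, out_w), dtype=np.float32)
    offset = np.zeros((2, out_h, out_w), dtype=np.float32)
    wh = np.zeros((2, out_h, out_w), dtype=np.float32)
    for x1, y1, x2, y2 in boxes:
        cx, cy = (x1 + x2) / 2 / stride, (y1 + y2) / 2 / stride
        ix, iy = int(cx), int(cy)
        if not (0 <= ix < out_w and 0 <= iy < out_h):
            continue
        # Gaussian bump around the face center on the heatmap
        sigma = max(x2 - x1, y2 - y1) / stride / 6 + 1e-6
        ys, xs = np.mgrid[0:out_h, 0:out_w]
        heatmap = np.maximum(
            heatmap, np.exp(-((xs - ix) ** 2 + (ys - iy) ** 2) / (2 * sigma ** 2)))
        offset[:, iy, ix] = cx - ix, cy - iy          # sub-pixel center offset
        wh[:, iy, ix] = (x2 - x1) / stride, (y2 - y1) / stride
    return heatmap, offset, wh
```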
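The loss superposition performed by the training loss calculation module can be sketched as a weighted sum of the four losses; the weights and stop threshold below are illustrative placeholders, not values disclosed in the patent:

```python
def total_training_loss(l_heatmap, l_offset, l_wh, l_id,
                        w=(1.0, 1.0, 0.1, 1.0)):
    """Superpose the four training losses (illustrative weights)."""
    return w[0] * l_heatmap + w[1] * l_offset + w[2] * l_wh + w[3] * l_id

def should_stop(loss_value, threshold=0.05):
    """Training finishes once the combined loss drops below a preset threshold."""
    return loss_value < threshold
```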
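The decoding pipeline of the face detection module — peak-based non-maximum suppression, thresholding on the heat response, adding the offsets, then reading the width-height values — can be sketched as follows. A 3×3 local-maximum test stands in for the NMS step, and the stride and threshold are assumed values:

```python
import numpy as np

def decode_faces(heatmap, offset, wh, stride=4, thresh=0.3):
    """Decode face boxes from the three output maps (center-point style sketch).

    heatmap: (H, W) center-point heat responses in [0, 1]
    offset:  (2, H, W) sub-pixel center offsets
    wh:      (2, H, W) face widths and heights in feature-map units
    """
    h, w = heatmap.shape
    boxes = []
    for y in range(h):
        for x in range(w):
            v = heatmap[y, x]
            # keep only local peaks above the heat-response threshold
            patch = heatmap[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            if v < thresh or v < patch.max():
                continue
            cx = (x + offset[0, y, x]) * stride   # add offset, map back to pixels
            cy = (y + offset[1, y, x]) * stride
            bw, bh = wh[0, y, x] * stride, wh[1, y, x] * stride
            boxes.append((cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2, v))
    return boxes
```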
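The matching step of the face recognition module can be sketched as a nearest-neighbor search by cosine similarity; the dict-based database layout and the `min_sim` threshold are assumptions for illustration:

```python
import numpy as np

def recognize(face_vec, database, min_sim=0.5):
    """Match a face feature vector against stored vectors by cosine similarity.

    database: dict mapping person id -> stored feature vector.
    Returns the best-matching id, or None if no similarity exceeds min_sim.
    """
    face_vec = face_vec / (np.linalg.norm(face_vec) + 1e-12)
    best_id, best_sim = None, min_sim
    for pid, vec in database.items():
        sim = float(face_vec @ (vec / (np.linalg.norm(vec) + 1e-12)))
        if sim > best_sim:
            best_id, best_sim = pid, sim
    return best_id
```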
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110626119.XA CN113239885A (en) | 2021-06-04 | 2021-06-04 | Face detection and recognition method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113239885A true CN113239885A (en) | 2021-08-10 |
Family
ID=77136745
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110626119.XA Pending CN113239885A (en) | 2021-06-04 | 2021-06-04 | Face detection and recognition method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113239885A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106503669A (en) * | 2016-11-02 | 2017-03-15 | 重庆中科云丛科技有限公司 | Multi-task deep learning network-based training and recognition method and system |
CN108268822A (en) * | 2016-12-30 | 2018-07-10 | 深圳光启合众科技有限公司 | Face identification method, device and robot |
CN109086660A (en) * | 2018-06-14 | 2018-12-25 | 深圳市博威创盛科技有限公司 | Training method, device and storage medium for a multi-task deep learning network |
CN109919097A (en) * | 2019-03-08 | 2019-06-21 | 中国科学院自动化研究所 | Face and key point combined detection system, method based on multi-task learning |
WO2019128646A1 (en) * | 2017-12-28 | 2019-07-04 | 深圳励飞科技有限公司 | Face detection method, method and device for training parameters of convolutional neural network, and medium |
CN110705357A (en) * | 2019-09-02 | 2020-01-17 | 深圳中兴网信科技有限公司 | Face recognition method and face recognition device |
CN111160108A (en) * | 2019-12-06 | 2020-05-15 | 华侨大学 | Anchor-free face detection method and system |
WO2020102988A1 (en) * | 2018-11-20 | 2020-05-28 | 西安电子科技大学 | Feature fusion and dense connection based infrared plane target detection method |
CN112270310A (en) * | 2020-11-24 | 2021-01-26 | 上海工程技术大学 | Cross-camera pedestrian multi-target tracking method and device based on deep learning |
CN112749626A (en) * | 2020-12-10 | 2021-05-04 | 同济大学 | DSP platform-oriented rapid face detection and recognition method |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI779815B (en) * | 2021-09-03 | 2022-10-01 | 瑞昱半導體股份有限公司 | Face recognition network model with face alignment based on knowledge distillation |
US11847821B2 (en) | 2021-09-03 | 2023-12-19 | Realtek Semiconductor Corp. | Face recognition network model with face alignment based on knowledge distillation |
CN114462495A (en) * | 2021-12-30 | 2022-05-10 | 浙江大华技术股份有限公司 | Training method of face occlusion detection model and related device |
CN115565207A (en) * | 2022-11-29 | 2023-01-03 | 武汉图科智能科技有限公司 | Pedestrian detection method for occlusion scenes with fused feature simulation |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20210810 |