CN116229556A - Face recognition method and device, embedded equipment and computer readable storage medium - Google Patents
Face recognition method and device, embedded equipment and computer readable storage medium Download PDFInfo
- Publication number
- CN116229556A (application number CN202310265736.0A)
- Authority
- CN
- China
- Prior art keywords
- face
- feature vector
- preset
- similarity
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 60
- 239000013598 vector Substances 0.000 claims abstract description 228
- 238000012216 screening Methods 0.000 claims abstract description 29
- 238000001514 detection method Methods 0.000 claims description 41
- 238000000605 extraction Methods 0.000 claims description 16
- 238000004364 calculation method Methods 0.000 claims description 8
- 238000004590 computer program Methods 0.000 claims description 6
- 230000008569 process Effects 0.000 abstract description 12
- 230000006870 function Effects 0.000 description 8
- 239000000284 extract Substances 0.000 description 3
- 238000012549 training Methods 0.000 description 3
- 230000004913 activation Effects 0.000 description 2
- 238000004422 calculation algorithm Methods 0.000 description 2
- 238000013527 convolutional neural network Methods 0.000 description 2
- 238000013135 deep learning Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 210000005069 ears Anatomy 0.000 description 2
- 230000006872 improvement Effects 0.000 description 2
- 239000000463 material Substances 0.000 description 2
- 238000003909 pattern recognition Methods 0.000 description 2
- 230000002776 aggregation Effects 0.000 description 1
- 238000004220 aggregation Methods 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 239000000428 dust Substances 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 238000011176 pooling Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Collating Specific Patterns (AREA)
Abstract
The invention discloses a face recognition method and device, an embedded device, and a non-volatile computer readable storage medium. The face recognition method comprises: detecting face information in an acquired image to generate a face image; detecting the quality of the face image based on a preset face screening model; and calculating, based on a reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality reaches a preset condition and preset face feature vectors, so as to perform face recognition. Because poor-quality face images are removed before recognition, the number of face images requiring face recognition is reduced and recognition efficiency is improved. In addition, the reconfigurable computing unit reduces the resource occupancy and time consumption of the embedded device during face recognition, so that the device can still meet the operating requirements of its other work.
Description
Technical Field
The present application relates to the field of face recognition technology, and more particularly, to a face recognition method, a face recognition device, an embedded device, and a non-volatile computer readable storage medium.
Background
In recent years, with the rapid improvement of computer performance and the continuous refinement of deep learning methods, great breakthroughs have been made in the fields of pattern recognition and artificial intelligence. Deep learning has achieved excellent results on many pattern recognition tasks, and face recognition is no exception. However, face recognition requires first extracting face features and then searching for them in a face feature library, and because the volume of face feature data is very large, recognition efficiency is low.
Disclosure of Invention
The embodiment of the application provides a face recognition method, a face recognition device, embedded equipment and a nonvolatile computer readable storage medium.
The face recognition method comprises the steps of: detecting face information in an acquired image to generate a face image; detecting the quality of the face image based on a preset face screening model; and calculating, based on a reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality reaches a preset condition and a preset face feature vector, so as to carry out face recognition.
The face recognition device comprises a generation module, a first detection module and a recognition module. The generation module is used for detecting face information in the acquired image so as to generate a face image. The first detection module is used for detecting the quality of the face image based on a preset face screening model. The recognition module is used for calculating, based on the reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality reaches a preset condition and the preset face feature vector, so as to carry out face recognition.
The embedded device of the embodiment of the application comprises a processor. The processor is used for detecting face information in the acquired image to generate a face image; detecting the quality of the face image based on a preset face screening model; and calculating, based on a reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality reaches a preset condition and a preset face feature vector, so as to carry out face recognition.
The non-volatile computer readable storage medium of the embodiments of the present application contains a computer program which, when executed by one or more processors, causes the processors to perform a face recognition method of: detecting face information in the acquired image to generate a face image; detecting the quality of the face image based on a preset face screening model; and calculating, based on a reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality reaches a preset condition and a preset face feature vector, so as to carry out face recognition.
In the face recognition method, the face recognition device, the embedded device and the non-volatile computer readable storage medium described above, the quality of each face image is detected based on a preset face screening model before face recognition is performed, and recognition based on the reconfigurable computing unit is carried out only if that quality reaches a preset condition. That is, poor-quality face images are removed before recognition, which reduces the number of face images that need to be recognized and improves recognition efficiency. Moreover, the reconfigurable computing unit reduces the workload of the processor, lowering the resource occupancy and time consumption of the embedded device during face recognition and thereby meeting the device's operating requirements for other work.
Additional aspects and advantages of embodiments of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of a face recognition method according to some embodiments of the present application;
FIG. 2 is a schematic diagram of a face recognition device of some embodiments of the present application;
FIG. 3 is a schematic plan view of an embedded device of some embodiments of the present application;
fig. 4 is a flow chart of a face recognition method according to some embodiments of the present application;
fig. 5 is a flow chart of a face recognition method according to some embodiments of the present application;
fig. 6 is a flow chart of a face recognition method according to some embodiments of the present application;
fig. 7 is a schematic view of a scenario of a face recognition method according to some embodiments of the present application;
fig. 8 is a flow chart of a face recognition method according to some embodiments of the present application;
fig. 9 is a flow chart of a face recognition method according to some embodiments of the present application;
fig. 10 is a flow chart of a face recognition method according to some embodiments of the present application;
FIG. 11 is a flow chart of a face recognition method of some embodiments of the present application;
fig. 12 is a flow chart of a face recognition method according to some embodiments of the present application;
fig. 13 is a flow chart of a face recognition method according to some embodiments of the present application;
fig. 14 is a schematic diagram of a connection state of a non-volatile computer readable storage medium and a processor according to some embodiments of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or to elements having like or similar functions throughout. The embodiments described below with reference to the drawings are exemplary; they serve only to explain the embodiments of the present application and are not to be construed as limiting them.
Referring to fig. 1, an embodiment of the present application provides a face recognition method. The face recognition method comprises the following steps:
01: detecting face information in the acquired image to generate a face image;
03: detecting the quality of a face image based on a preset face screening model; and
05: and based on the reconfigurable computing unit, computing the similarity between the face feature vector of the face image with the quality reaching the preset condition and the preset face feature vector so as to conduct face recognition.
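The three steps above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: `detect`, `quality`, and `extract` play the roles of the face detection model, the face screening model, and the feature extractor, and the cosine-similarity loop stands in for the computation the patent offloads to the reconfigurable computing unit.

```python
import math

# Hypothetical sketch of the three-step method (steps 01, 03, 05).
# `detect`, `quality`, and `extract` are stand-ins for the face detection
# model, the face screening model, and the feature extractor; the similarity
# loop stands in for the computation offloaded to the RCU.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def recognize(image, face_db, detect, quality, extract,
              quality_threshold=0.8, match_threshold=0.6):
    matches = []
    for face in detect(image):                  # step 01: generate face images
        if quality(face) < quality_threshold:   # step 03: screen out poor quality
            continue
        vec = extract(face)                     # face feature vector
        for name, ref in face_db.items():       # step 05: compare with presets
            if cosine_similarity(vec, ref) >= match_threshold:
                matches.append(name)
    return matches
```

Passing the models in as callables mirrors the document's separation of detection, screening, and recognition into distinct modules.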
Referring to fig. 2, an embodiment of the present application provides a face recognition device 10. The face recognition device 10 comprises a generation module 11, a first detection module 12 and a recognition module 13. The face recognition method of the embodiment of the present application is applicable to the face recognition device 10. Wherein the generating module 11, the first detecting module 12 and the identifying module 13 are used for executing the steps 01, 03 and 05 respectively. That is, the generating module 11 is configured to detect face information in the acquired image to generate a face image. The first detection module 12 is configured to detect quality of a face image based on a preset face screening model. The recognition module 13 is configured to calculate, based on the reconfigurable computing unit, a similarity between a face feature vector of a face image with a quality reaching a preset condition and a preset face feature vector, so as to perform face recognition.
Referring to fig. 3, the embodiment of the present application further provides an embedded device 100. The face recognition method of the embodiment of the present application may be applied to the embedded device 100. The embedded device 100 includes a processor 20. The processor 20 is configured to perform step 01, step 03 and step 05. That is, the processor 20 is configured to detect face information in the captured image to generate a face image; detect the quality of the face image based on a preset face screening model; and calculate, based on the reconfigurable computing unit, the similarity between the face feature vector of a face image whose quality reaches a preset condition and the preset face feature vector, so as to perform face recognition.
The embedded device 100 further includes a housing 30 and a camera 40. The housing 30 may be used to mount functional modules of the embedded device 100, such as a display device, an imaging device, a power supply device and a communication device, so that the housing 30 protects the functional modules from dust, drops, water and the like. The camera 40 may be used to capture images. The embedded device 100 may be a cell phone, a digital camera, a smart watch, a head-mounted device, a game console, a robot, etc. As shown in fig. 3, the embodiment of the present application is described taking a mobile phone as the embedded device 100; it is understood that the specific form of the embedded device 100 is not limited to a mobile phone.
Specifically, after the camera 40 captures an image, the processor 20 may detect face information in the captured image to generate a face image. The face information may include the position of the face in the image and the positions of the face key points (such as the eyes, mouth, nose and ears). The processor 20 may employ a face detection algorithm to obtain the face information in the image; the algorithm may be, for example, a FaceBoxes or RetinaFace face detection network. In this way, the processor 20 may generate a face image from the obtained face information.
The processor 20 may then detect the quality of the face image according to a preset face screening model. The face screening model may adopt a 16-layer convolutional neural network comprising convolutional layers, activation layers, batch normalization (BN) layers and a loss layer. The loss layer uses a robust-regression (Huber) loss function.
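The Huber loss named above combines a quadratic penalty for small residuals with a linear penalty for large ones, which is what makes the regression robust to outlier training samples. A pure-Python sketch; the transition point `delta` is an assumed hyperparameter, since the patent does not state its value:

```python
# Huber loss: quadratic near zero, linear in the tails. `delta` (the
# transition point) is an assumed hyperparameter, not a value from the patent.
def huber_loss(prediction, target, delta=1.0):
    residual = prediction - target
    if abs(residual) <= delta:
        return 0.5 * residual * residual            # quadratic region
    return delta * (abs(residual) - 0.5 * delta)    # linear region
```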
The preset face screening model is a screening model trained in advance, and the training samples in the face screening model can comprise face samples with higher quality and face samples with lower quality.
More specifically, a higher-quality face sample is one whose facial feature points are stable and easy to identify, while a lower-quality sample is one whose feature points are difficult to identify. Therefore, when the processor 20 detects the quality of a face image through the preset face screening model, the face image is input into the model, and the processor 20 obtains the quality of the face image from its output.
For example, when feature points of the face image are not easy to identify, the preset face screening model outputs a lower quality face image. For another example, when feature points of the face image are easy to identify, the preset face screening model outputs a higher quality face image.
In some embodiments, the preset face screening model may also be trained so that samples of different quality correspond to different scores; that is, the model scores an image according to the number of feature points it can identify. Understandably, the more feature points are identified, the higher the score of the corresponding face image.
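A concrete, hypothetical version of this scoring rule: quality is the fraction of a fixed landmark set that the model manages to identify, mapped to a 100-point scale. The five-landmark set below is an assumption for illustration, not the patent's actual design:

```python
# Hypothetical scoring rule: score grows with the number of identified
# landmarks. The five-landmark set and the 100-point scale are assumptions.
LANDMARKS = ("left_eye", "right_eye", "nose", "mouth_left", "mouth_right")

def quality_score(identified_landmarks):
    found = sum(1 for name in LANDMARKS if name in identified_landmarks)
    return 100.0 * found / len(LANDMARKS)
```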
In this way, when the quality of the face image reaches the preset condition, the processor 20 performs face recognition based on the face image.
In one embodiment, the preset condition may be a binary high-quality/low-quality label, so that the processor 20 performs face recognition on a face image only when its quality is rated high.
In another embodiment, the preset condition may be a specific score, such as 80 or 90 on a 100-point scale. After the processor 20 obtains the quality score of the face image through the preset face screening model, it compares the score with the preset condition, and performs recognition on the face image only when the score is greater than or equal to the preset score.
It will be appreciated that the processor 20 performs face recognition on a face image only if its quality reaches the preset condition. That is, lower-quality face images are screened out and never reach the recognition stage, which reduces the recognition workload; because recognition is performed only on higher-quality face images, the recognition rate is also improved.
More specifically, when performing face recognition, the processor 20 may use the reconfigurable computing unit to extract the face feature vector of a face image whose quality reaches the preset condition.
Next, the processor 20 may input the preset face feature vectors in the preset face information database to the reconfigurable computing unit (RCU) for calculation. The RCU is a hardware device that calculates the similarity between the feature vector of the face to be recognized and the pre-stored feature vectors. Offloading this computation reduces the workload of the processor 20, and thus the resource occupancy and time consumption of the embedded device 100 during face recognition, thereby meeting the operating requirements of the embedded device 100 for other work.
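The arithmetic being offloaded here is the similarity between one query feature vector and every pre-stored vector. With L2-normalised vectors, cosine similarity reduces to a dot product; the real device performs this in hardware, and the pure-Python version below only illustrates the math:

```python
import math

# Illustration of the similarity computation offloaded to the RCU: one
# query vector against all pre-stored vectors. Pure Python; the hardware
# device performs the equivalent arithmetic.

def l2_normalize(vec):
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def batch_similarity(query, preset_vectors):
    q = l2_normalize(query)
    return [sum(a * b for a, b in zip(q, l2_normalize(v))) for v in preset_vectors]
```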
In the face recognition method, the face recognition device 10 and the embedded device 100 of the embodiments of the present application, the quality of each face image is detected based on a preset face screening model before face recognition is performed, and recognition based on the reconfigurable computing unit is carried out only if that quality reaches a preset condition. That is, poor-quality face images are removed before recognition, which reduces the number of face images that need to be recognized and improves recognition efficiency. Moreover, the reconfigurable computing unit reduces the workload of the processor 20, lowering the resource occupancy and time consumption of the embedded device 100 during face recognition and thereby meeting the device's operating requirements for other work.
Referring to fig. 2, 3 and 4, in some embodiments, step 01: detecting face information in the acquired image to generate a face image, comprising the steps of:
011: detecting the face position in the acquired image based on a preset face detection model;
012: and generating a face image according to the face position.
In some embodiments, the generating module 11 is configured to perform step 011 and step 012. Namely, the generating module 11 is configured to detect a face position in the acquired image based on a preset face detection model; and generating a face image according to the face position.
In certain embodiments, processor 20 is configured to perform steps 011 and 012. That is, the processor 20 is configured to detect a face position in the acquired image based on a preset face detection model; and generating a face image according to the face position.
Specifically, the face information includes the face position of the target face, that is, the position of the face in the image acquired by the camera 40. When the processor 20 detects face information in the captured image to generate a face image, it may proceed as follows: the processor 20 detects the face position in the acquired image based on a preset face detection model. As noted above, the preset face detection model may be a FaceBoxes or RetinaFace face detection network, or the like.
After the processor 20 obtains the image collected by the camera 40, a face position in the collected image may be obtained according to a preset face detection model, so as to generate a face image according to the face position.
In this way, the image data that does not belong to the face region can be removed from the image collected by the camera 40, which reduces the amount of data to be processed when detecting the quality of the face image and improves recognition efficiency.
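Generating the face image from the face position amounts to cropping the detected box out of the frame. A minimal sketch, treating the image as a 2-D list of pixels; the `(x1, y1, x2, y2)` box format is an assumption:

```python
# Minimal sketch of generating the face image from the face position:
# crop the detected box out of the frame. The (x1, y1, x2, y2) box
# format is an assumption.
def crop_face(image_rows, box):
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image_rows[y1:y2]]
```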
Referring to fig. 2, 3 and 5, the face recognition method in the embodiment of the present application further includes the steps of:
07: detecting the positions of face key points in a face image based on a preset face detection model; and
09: and aligning the face image according to the face position and the positions of the face key points, so that the pose of the target face in the aligned face image is adjusted to a preset pose.
Part of step 03, detecting the quality of the face image, comprises the following step:
031: and detecting the quality of the aligned face image.
In some embodiments, the face recognition device 10 further comprises a second detection module 14 and an alignment module 15. The second detection module 14 is used to perform step 07, the alignment module 15 is used to perform step 09, and the first detection module 12 is used to perform step 031. That is, the second detection module 14 is configured to detect the positions of the face key points in the face image based on a preset face detection model; the alignment module 15 is configured to align the face image according to the face position and the positions of the face key points, so that the pose of the target face in the aligned face image is adjusted to a preset pose; and the first detection module 12 is configured to detect the quality of the aligned face image.
In certain embodiments, the processor 20 is configured to perform step 07, step 09 and step 031. That is, the processor 20 is configured to detect the positions of the face key points in the face image based on a preset face detection model; align the face image according to the face position and the positions of the face key points, so that the pose of the target face in the aligned face image is adjusted to a preset pose; and detect the quality of the aligned face image.
Specifically, the face information also includes key points of the face. After generating the face image, the processor 20 may further detect the positions of the face key points in the face image according to a preset face detection model. The positions of the key points of the human face can be the positions of eyes, nose, mouth, ears and the like.
Thus, after the processor 20 obtains the face position and the positions of the face key points, it aligns the face image accordingly, so that the pose of the target face in the aligned face image is adjusted to the preset pose. The preset pose may be a frontal view of the face, a side view, or the like. Preferably, to ensure that feature points in the face image are easy to extract, and considering that the face image is used for face recognition, the preset pose is a frontal view.
Specifically, the processor 20 may apply an affine transformation to the face image according to the face position and the positions of the face key points, so that the face is turned to face the front, i.e., the face image becomes a frontal view.
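A minimal illustration of this affine alignment: estimate the in-plane roll angle from the two eye key points and build the 2×3 rotation matrix that would level the eyes about their midpoint. Real alignment typically uses all five key points and also warps the pixels; only the matrix is computed here:

```python
import math

# Estimate the roll angle from the eye key points and build the 2x3 affine
# matrix that levels the eyes while keeping their midpoint fixed. This is a
# simplified stand-in for the full key-point-based alignment.
def alignment_matrix(left_eye, right_eye):
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.atan2(dy, dx)                 # in-plane roll of the face
    cx = (left_eye[0] + right_eye[0]) / 2.0    # rotate about the eye midpoint
    cy = (left_eye[1] + right_eye[1]) / 2.0
    cos_a, sin_a = math.cos(-angle), math.sin(-angle)
    tx = cx - cos_a * cx + sin_a * cy          # translation chosen so the
    ty = cy - sin_a * cx - cos_a * cy          # midpoint stays in place
    return [[cos_a, -sin_a, tx], [sin_a, cos_a, ty]]
```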
Further, the processor 20 detects the quality of the aligned face image when detecting the quality of the face image based on a preset face screening model.
This ensures that the face image evaluated by the preset face screening model has clear feature points, so that the detection of the face image's quality is more accurate.
Referring to fig. 2, 3 and 6, in some embodiments, the part of step 011, detecting the face position in the acquired image, further comprises the steps of:
0111: generating a plurality of candidate frames with preset sizes according to the sizes of the acquired images, and outputting the scores of each candidate frame;
0113: determining candidate frames with scores greater than a preset score as face frames;
0115: acquiring the coincidence degree between any two face frames with coincidence parts, and determining the face frame with the highest score as a target face frame in the face frames with the coincidence degree larger than the preset coincidence degree; and
0117: and outputting the position of the target face frame as the face position.
In some embodiments, the second detection module 14 is configured to perform steps 0111, 0113, 0115, and 0117. The second detection module 14 is configured to generate a plurality of candidate frames with preset sizes according to the sizes of the acquired images, and output a score of each candidate frame; determining candidate frames with scores greater than a preset score as face frames; acquiring the coincidence degree between any two face frames with coincidence parts, and determining the face frame with the highest score as a target face frame in the face frames with the coincidence degree larger than the preset coincidence degree; and outputting the position of the target face frame as the face position.
In some embodiments, processor 20 is configured to perform steps 0111, 0113, 0115, and 0117. That is, the processor 20 is configured to generate a plurality of candidate frames of a preset size according to the size of the acquired image, and output a score of each candidate frame; determining candidate frames with scores greater than a preset score as face frames; acquiring the coincidence degree between any two face frames with coincidence parts, and determining the face frame with the highest score as a target face frame in the face frames with the coincidence degree larger than the preset coincidence degree; and outputting the position of the target face frame as the face position.
Specifically, when the processor 20 detects a face position in an image based on a preset face detection model, the processor 20 may generate a plurality of candidate frames of a preset size according to the size of the captured image and output a score for each candidate frame. The preset size is proportional to the size of the captured image; that is, the larger the captured image, the larger the candidate frames of the preset size.
As shown in fig. 7 (a), if the size of the acquired image P1 is 1600×900, the size S1 of the candidate frames is 200×150. As shown in fig. 7 (b), if the size of the acquired image P2 is 1920×1080, the size S2 of the candidate frames is 240×200.
When the processor 20 scores the plurality of candidate frames, the plurality of candidate frames may be scored based on a pre-trained face detection model. For example, the pre-trained face detection model includes scores corresponding to training samples of different qualities, and when the candidate frames are input into the pre-trained face detection model, the score of each candidate frame is obtained. For example, the more features of a face in a candidate box, the higher the score.
Next, the processor 20 may determine the candidate frames whose scores are greater than a preset score to be face frames, according to the score of each candidate frame. The preset score may be a manually set value, such as 80 or 90. Understandably, the further a candidate frame's score exceeds the preset score, the more face features the image within that frame contains.
Further, the processor 20 may obtain the coincidence ratio between any two face frames with coincidence portions, so as to determine the target face frame according to the coincidence ratio and the score of the candidate frames.
More specifically, when the overlapping portions of any two face frames are more, the overlapping portions are larger. When the overlap ratio is greater than the preset overlap ratio, the processor 20 considers the face frame with the highest score of the two face frames as the target face frame. It can be understood that when the overlap ratio is greater than the preset overlap ratio, the more portions of the shared area of the two face frames are indicated.
That is, the target face frame is the face frame with the highest score among all the face frames whose overlap ratio the processor 20 determines to be greater than the preset overlap ratio.
Thus, the processor 20 can output the position of the target face frame as the face position. It can be understood that the position of the target face frame is the most accurate face position in the acquired image, so that the accuracy in detecting the quality of the face image can be ensured.
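The selection steps above (score filtering, pairwise overlap ratio, keeping the highest-scoring frame) amount to a non-maximum-suppression pass. A minimal sketch, assuming intersection-over-union as the overlap ratio and illustrative threshold values:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def select_target_frames(frames, score_thresh=0.8, overlap_thresh=0.5):
    """frames: list of (box, score). Keep frames scoring above
    score_thresh, then suppress any lower-scoring frame whose overlap
    with an already-kept frame exceeds overlap_thresh."""
    kept = []
    candidates = sorted((f for f in frames if f[1] > score_thresh),
                        key=lambda f: -f[1])
    for box, score in candidates:
        if all(iou(box, k[0]) <= overlap_thresh for k in kept):
            kept.append((box, score))
    return kept
```

The thresholds 0.8 and 0.5 are assumptions standing in for the "preset score" and "preset overlap ratio" of the method.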
Referring to fig. 2, 3 and 8, in certain embodiments, step 05: based on the reconfigurable computing unit, the similarity between the face feature vector of the face image with the quality reaching the preset condition and the preset face feature vector is computed so as to perform face recognition, and the method further comprises the steps of:
051: extracting features of the face image based on a preset feature extraction model to generate a face feature vector; and
053: and according to the face feature vector, a face information database is established.
In certain embodiments, the identification module 13 is configured to perform step 051 and step 053. That is, the recognition module 13 is configured to extract features of the face image based on a preset feature extraction model, so as to generate a face feature vector; and establishing a face information database according to the face feature vector.
In certain embodiments, the processor 20 is configured to perform step 051 and step 053. That is, the processor 20 is configured to extract features of the face image based on a preset feature extraction model to generate a face feature vector; and establishing a face information database according to the face feature vector.
Specifically, when performing face recognition based on the face image, the processor 20 may further extract features of the face image based on a preset feature extraction model to generate a face feature vector, and build a face information database according to the face feature vector.
The preset feature extraction model is an extraction model trained in advance; a 72-layer convolutional neural network may be used, comprising convolutional layers, pooling layers, activation layers, a fully connected layer and a loss layer. The loss function is a weighted sum of softmax-loss and center-loss: the softmax-loss increases the inter-class distance of samples in the feature space, while the center-loss improves the intra-class aggregation of samples in the feature space.
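A minimal NumPy sketch of such a weighted loss follows; the weight lam and all shapes and values are illustrative assumptions. Conventionally, the softmax cross-entropy term drives inter-class separation while the center term pulls each sample toward its class center:

```python
import numpy as np

def softmax_center_loss(features, logits, labels, centers, lam=0.5):
    """Weighted sum of softmax cross-entropy and center loss.

    features: (B, D) embeddings, logits: (B, C) class scores,
    labels: (B,) integer class labels, centers: (C, D) per-class centers.
    lam weights the center term; its value here is an assumption.
    """
    # softmax cross-entropy (numerically stabilized): inter-class separability
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # center loss: distance of each sample to its class center (intra-class compactness)
    diff = features - centers[labels]
    center = 0.5 * (diff ** 2).sum(axis=1).mean()
    return ce + lam * center
```

Samples lying exactly on their class centers with confidently correct logits give a loss near zero; moving them away from the centers increases the loss.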
In the face recognition method according to the embodiment of the present application, when performing face recognition, face information capable of performing face recognition needs to be stored in the face information database. Therefore, before face recognition, the user needs to input own face information. That is, a process of offline entering of face images is required.
The specific process is shown in fig. 9. First, the camera 40 of the embedded device 100 acquires an image for detection by the processor 20. The processor 20 detects the face information (face position and face key point positions) based on the preset face detection model, so as to align the face image. Next, the processor 20 detects the quality of the face image based on the preset face screening model. When the quality of the face image reaches the preset condition, the processor 20 extracts the features of the face image based on the preset feature extraction model to generate a face feature vector and establishes the face information database. When the quality of the face image does not reach the preset condition, the user is prompted to re-capture the image with the camera 40, and the above steps are repeated until the quality of the face image reaches the preset condition.
It can be understood that only face images of sufficient quality are stored in the face information database, so the images processed during face recognition are guaranteed to be of good quality, which in turn guarantees the accuracy of face recognition.
Referring to fig. 2, 3 and 10, in certain embodiments, step 05: based on the reconfigurable computing unit, the similarity between the face feature vector of the face image with the quality reaching the preset condition and the preset face feature vector is computed so as to perform face recognition, and the method further comprises the steps of:
055: extracting features of the face image based on a preset feature extraction model to generate a face feature vector to be detected;
057: inputting a pre-stored feature vector in a preset face information database to a reconfigurable computing unit so as to calculate the similarity between the face feature vector to be detected and the pre-stored feature vector, and determining a target feature vector according to the similarity; and
059: and under the condition that the similarity between the target feature vector and the face feature vector to be detected is larger than a preset threshold value, determining that the face authentication is successful.
In certain embodiments, the identification module 13 is configured to perform step 055, step 057, and step 059. That is, the recognition module 13 is configured to extract features of the face image based on a preset feature extraction model, so as to generate a feature vector of the face to be detected; inputting a pre-stored feature vector in a preset face information database to a reconfigurable computing unit so as to calculate the similarity between the face feature vector to be detected and the pre-stored feature vector, and determining a target feature vector according to the similarity; and determining that the face authentication is successful under the condition that the similarity between the target feature vector and the face feature vector to be detected is larger than a preset threshold value.
In certain embodiments, processor 20 is configured to perform step 055, step 057, and step 059. That is, the processor 20 is configured to extract features of the face image based on a preset feature extraction model, so as to generate a feature vector of the face to be detected; inputting a pre-stored feature vector in a preset face information database to a reconfigurable computing unit so as to calculate the similarity between the face feature vector to be detected and the pre-stored feature vector, and determining a target feature vector according to the similarity; and determining that the face authentication is successful under the condition that the similarity between the target feature vector and the face feature vector to be detected is larger than a preset threshold value.
Specifically, when the processor 20 performs face recognition based on the face image, the processor 20 may extract features of the face image based on a preset feature extraction model, so as to generate a face feature vector to be detected.
Next, the processor 20 may input the pre-stored feature vectors in the preset face information database to the reconfigurable computing unit, an RCU (Reconfigurable Computing Unit) device, for calculation. Having the RCU device calculate the similarity between the face feature vector to be detected and the pre-stored feature vectors reduces the workload of the processor 20, and thus the resource occupancy and time consumption of the embedded device 100 during face recognition, leaving capacity for the other operations of the embedded device 100.
More specifically, after the RCU device calculates the similarity between the face feature vector to be detected and the pre-stored feature vectors, the processor 20 may determine the target feature vector according to the similarities. When the similarity between the target feature vector and the face feature vector to be detected is greater than the preset threshold, the processor 20 may determine that the face authentication is successful. It can be understood that the target feature vectors are the one or more pre-stored feature vectors, among all pre-stored feature vectors, whose similarity with the face feature vector to be detected is greater than the preset threshold.
The preset threshold value may be any value set manually, such as 90%, 95%, 98%, etc. When the similarity between the target feature vector and the face feature vector to be detected is greater than a preset threshold value, the face image is indicated to be capable of carrying out face recognition and successfully unlocking.
In this way, the processor 20 offloads the calculation of the similarity between the face feature vector to be detected and the pre-stored feature vectors to the RCU device, thereby reducing the resource occupation of face recognition on the embedded device 100, reducing the time consumption of face recognition, and leaving capacity for the other operations of the embedded device 100.
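The threshold decision of steps 057 and 059 can be sketched as follows; cosine similarity and the 0.9 threshold are illustrative stand-ins for the metric actually computed by the RCU device:

```python
import numpy as np

def authenticate(probe, database, threshold=0.9):
    """Return (success, best_id). database maps face ID -> stored vector.

    Cosine similarity is an assumed stand-in for the similarity the RCU
    device computes; the threshold value is likewise an assumption.
    """
    best_id, best_sim = None, -1.0
    for face_id, stored in database.items():
        sim = float(np.dot(probe, stored) /
                    (np.linalg.norm(probe) * np.linalg.norm(stored)))
        if sim > best_sim:
            best_id, best_sim = face_id, sim
    return best_sim > threshold, best_id
```

Authentication succeeds only when the best-matching pre-stored vector exceeds the threshold; otherwise the best candidate is still returned for inspection.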
Referring to fig. 2, 3 and 11, in some embodiments, step 057: inputting a pre-stored feature vector in a preset face information database to a reconfigurable computing unit so as to calculate the similarity between the face feature vector to be detected and the pre-stored feature vector, determining a target feature vector according to the similarity, and further comprising the steps of:
0571: dividing a pre-stored feature vector in a face information database into a plurality of feature sets according to the memory capacity of the reconfigurable computing unit, so that the memory occupied by each feature set is smaller than the memory capacity;
0573: sequentially inputting each feature set into a reconfigurable computing unit to compute the similarity of the face feature vector to be tested and the pre-stored feature vector of each feature set;
0575: and sorting according to the corresponding similarity of each pre-stored feature vector to determine the pre-stored feature vector of the preset sorting as a target feature vector.
In certain embodiments, the identification module 13 is configured to perform step 0571, step 0573, and step 0575. That is, the identification module 13 is configured to divide the pre-stored feature vectors in the face information database into a plurality of feature sets according to the memory capacity of the reconfigurable computing unit, so that the memory occupied by each feature set is smaller than the memory capacity; sequentially inputting each feature set into a reconfigurable computing unit to compute the similarity of the face feature vector to be tested and the pre-stored feature vector of each feature set; and sorting according to the corresponding similarity of each pre-stored feature vector to determine the pre-stored feature vector of the preset sorting as a target feature vector.
In certain embodiments, processor 20 is configured to perform step 0571, step 0573, and step 0575. That is, the processor 20 is configured to divide the pre-stored feature vectors in the face information database into a plurality of feature sets according to the memory capacity of the reconfigurable computing unit, so that the memory occupied by each feature set is smaller than the memory capacity; sequentially inputting each feature set into a reconfigurable computing unit to compute the similarity of the face feature vector to be tested and the pre-stored feature vector of each feature set; and sorting according to the corresponding similarity of each pre-stored feature vector to determine the pre-stored feature vector of the preset sorting as a target feature vector.
Specifically, when the reconfigurable computing unit (RCU device) computes the similarity between the feature vector of the face to be detected and the pre-stored feature vector, the pre-stored feature vector in the face information database may be divided into a plurality of feature sets according to the memory capacity of the RCU device.
More specifically, taking a total of N pre-stored feature vectors as an example, the processor 20 may divide them into M feature sets of (N+M-1)/M vectors each (i.e., the ceiling of N/M). This guarantees that the vectors are divided into at most M nearly equal parts and that the memory occupied by each feature set is smaller than the memory capacity of the RCU device.
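The division into nearly equal feature sets corresponds to ceiling division, (N + M - 1) // M vectors per set; a minimal sketch:

```python
def split_into_feature_sets(vectors, m):
    """Split `vectors` into at most m feature sets of
    (N + m - 1) // m vectors each (ceiling division), so every set
    fits within a fixed per-set budget such as the RCU memory capacity."""
    n = len(vectors)
    per_set = (n + m - 1) // m  # ceiling of n / m
    return [vectors[i:i + per_set] for i in range(0, n, per_set)]
```

For example, 10 vectors split with m = 3 give sets of sizes 4, 4 and 2.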
Next, the processor 20 may sequentially input each feature set into the RCU device to calculate the similarity between the face feature vector to be detected and the pre-stored feature vectors of each set. Because the memory capacity of the RCU device is limited, the RCU device can only hold the pre-stored feature vectors of one feature set at a time during calculation; inputting the feature sets sequentially therefore ensures that the RCU device computes the similarity between the face feature vector to be detected and the pre-stored feature vectors of every feature set.
Finally, the processor 20 may obtain the similarity corresponding to each pre-stored feature vector, sort the pre-stored feature vectors by similarity, and take those within a preset ranking as the target feature vectors. The processor 20 may sort in descending order of similarity, and the preset ranking may be 5, 10, 15, etc. For example, when the preset ranking is 5, the first 5 pre-stored feature vectors, i.e., the 5 with the greatest similarity, are the target feature vectors.
In some embodiments, after the first feature set is input to the RCU device, the RCU device may calculate the similarity between each pre-stored feature vector in the first feature set and the face feature vector to be detected, and retain the K largest similarities, i.e., the first K after sorting the similarities of that feature set in descending order.
Then, the processor inputs the second feature set to the RCU device, which calculates the similarity between each pre-stored feature vector in the second feature set and the face feature vector to be detected to obtain K1 new similarities, compares them with the K similarities retained from the first feature set, and updates the retained set to obtain K2 similarities. K1 and K2 are both equal to K; the K2 similarities are the largest among all similarities computed over the first and second feature sets.
By analogy, after the processor inputs each subsequent feature set into the RCU device, the similarities between that set and the face feature vector to be detected are calculated and merged with the K similarities retained from the previous sets, until all feature sets have been input. The result is the K largest similarities between the pre-stored feature vectors of all feature sets and the face feature vector to be detected, and the pre-stored feature vectors corresponding to these K similarities are the target feature vectors.
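The running K-similarity update described above can be sketched with a fixed-size min-heap; K and the similarity function are illustrative assumptions:

```python
import heapq

def top_k_across_sets(feature_sets, similarity, probe, k):
    """Stream feature sets through a size-k min-heap so that, after all
    sets are processed, the heap holds the k pre-stored vectors most
    similar to the probe (the K/K1/K2 update described in the text)."""
    heap = []  # entries are (similarity, index, vector); min-heap on similarity
    idx = 0
    for feature_set in feature_sets:
        for vec in feature_set:
            item = (similarity(probe, vec), idx, vec)
            idx += 1
            if len(heap) < k:
                heapq.heappush(heap, item)
            elif item[0] > heap[0][0]:
                heapq.heapreplace(heap, item)  # evict current smallest
    return sorted(heap, key=lambda t: -t[0])
```

Because the heap never exceeds size k, the merge uses constant memory regardless of how many feature sets are streamed through it.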
Referring to fig. 2, 3 and 12, in some embodiments, step 057: the calculating of the similarity between the face feature vector to be detected and the pre-stored feature vector further comprises the steps of:
0577: calculating Euclidean distance between the face feature vector to be detected and the pre-stored feature vector; and
0579: and determining the similarity between the face feature vector to be detected and the pre-stored feature vector according to the Euclidean distance.
In certain embodiments, the identification module 13 is configured to perform step 0577 and step 0579. Namely, the recognition module 13 is used for calculating the Euclidean distance between the face feature vector to be detected and the pre-stored feature vector; and determining the similarity of the face feature vector to be detected and the pre-stored feature vector according to the Euclidean distance.
In certain embodiments, processor 20 is configured to perform step 0577 and step 0579. That is, the processor 20 is configured to calculate the euclidean distance between the feature vector of the face to be detected and the pre-stored feature vector; and determining the similarity of the face feature vector to be detected and the pre-stored feature vector according to the Euclidean distance.
Specifically, when the RCU device calculates the similarity between the face feature vector to be detected and the pre-stored feature vector, it may first calculate the Euclidean distance between the two vectors and then determine their similarity from that distance.
The Euclidean distance is calculated by formula (1) in two-dimensional space, formula (2) in three-dimensional space, and formula (3) in N-dimensional space; the formula matching the dimensionality of the face feature vector to be detected and the pre-stored feature vector is selected for the calculation:

ρ = √((x₂ - x₁)² + (y₂ - y₁)²)  (1)

d = √((x₂ - x₁)² + (y₂ - y₁)² + (z₂ - z₁)²)  (2)

d(x, y) = √(Σᵢ (xᵢ - yᵢ)²), i = 1, …, N  (3)

where ρ and d(x, y) are the Euclidean distance, (x₁, y₁) are the coordinates of the face feature vector to be detected, and (x₂, y₂) are the coordinates of the pre-stored feature vector.
More specifically, the smaller the Euclidean distance, the greater the similarity between the face feature vector to be detected and the pre-stored feature vector. Therefore, after the Euclidean distance between the face feature vector to be detected and each pre-stored feature vector is calculated, the one or more pre-stored feature vectors most similar to the face feature vector to be detected can be determined from the distances.
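A minimal sketch of the distance calculation together with one common distance-to-similarity mapping, 1/(1 + d); the mapping is an assumption, since the method only requires that smaller distance mean greater similarity:

```python
import math

def euclidean_distance(x, y):
    """N-dimensional Euclidean distance, as in formula (3)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def similarity_from_distance(x, y):
    """Assumed mapping: monotonically decreasing in distance, so a
    smaller Euclidean distance yields a higher similarity (1 at d = 0)."""
    return 1.0 / (1.0 + euclidean_distance(x, y))
```

Any strictly decreasing function of the distance would preserve the ranking of pre-stored feature vectors.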
In one embodiment, the RCU device may sequentially calculate the Euclidean distance between the face feature vector to be detected and the pre-stored feature vectors of each feature set, retaining only the K pre-stored feature vectors with the smallest distances per set. The retained vectors of the feature sets are then compared in turn to obtain the O pre-stored feature vectors with the smallest Euclidean distances across all feature sets. K and O are positive integers greater than 0, and K and O may be equal.
In the face recognition method of the embodiment of the present application, as shown in fig. 13, when the embedded device 100 performs face recognition, the processor 20 first obtains a real-time image of the user through the camera 40 and performs face detection on the real-time image to obtain a face image. The face image is then aligned according to the face position and the face key point positions, and screened according to the preset face screening model, i.e., quality detection. When the quality reaches the preset condition, the face features of the face image are extracted; when the quality does not reach the preset condition, the embedded device 100 may end face recognition and may remind the user that face recognition failed. After extracting the features of the face image according to the preset feature extraction model to generate the face feature vector to be detected, the processor 20 may use the RCU device to calculate the similarity between this vector and the pre-stored feature vectors in the face information database, obtaining the most similar pre-stored feature vectors as target feature vectors, i.e., returning the IDs of similar faces. When the similarity between a target feature vector and the face feature vector to be detected is greater than the preset threshold, the face authentication is determined to be successful.
Referring to fig. 14, embodiments of the present application also provide a non-transitory computer readable storage medium 300 containing a computer program 301. The computer program 301, when executed by the one or more processors 20, causes the one or more processors 20 to perform the face recognition method of any of the embodiments described above.
For example, the computer program 301, when executed by the one or more processors 20, causes the processors 20 to perform the following face recognition method:
01: detecting face information in the acquired image to generate a face image;
03: detecting the quality of a face image based on a preset face screening model; and
05: and based on the reconfigurable computing unit, computing the similarity between the face feature vector of the face image with the quality reaching the preset condition and the preset face feature vector so as to conduct face recognition.
As another example, the computer program 301, when executed by the one or more processors 20, causes the processors 20 to perform the following face recognition method:
07: detecting the positions of face key points in a face image based on a preset face detection model; and
09: and aligning the face image according to the face position and the position of the face key point so as to adjust the gesture of the target face of the aligned face image to be a preset gesture.
In the description of the present specification, reference to the terms "certain embodiments," "in one example," "illustratively," and the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiments or examples is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the present application, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the present application.
Claims (18)
1. A face recognition method, comprising:
detecting face information in the acquired image to generate a face image;
detecting the quality of the face image based on a preset face screening model; and
And based on the reconfigurable computing unit, computing the similarity between the face feature vector of the face image with the quality reaching the preset condition and the preset face feature vector so as to conduct face recognition.
2. The face recognition method according to claim 1, wherein the face information includes a face position of a target face, and the detecting acquires the face information in the image to generate the face image includes:
detecting the face position in the acquired image based on a preset face detection model;
and generating the face image according to the face position.
3. The face recognition method according to claim 2, wherein the face information further includes a face key point, the face recognition method further comprising:
Detecting the positions of the face key points in the face image based on a preset face detection model;
aligning the face image according to the face position and the position of the face key point so that the posture of the target face of the aligned face image is adjusted to be a preset posture;
the detecting the quality of the face image includes:
and detecting the quality of the face image after alignment.
4. The face recognition method according to claim 2, wherein the detecting the face position in the captured image includes:
generating a plurality of candidate frames with preset sizes according to the sizes of the acquired images, and outputting the scores of the candidate frames;
determining the candidate frames with scores greater than a preset score as face frames;
acquiring the coincidence degree between any two face frames with coincidence parts, and determining the face frame with the highest score as a target face frame in the face frames with the coincidence degree larger than the preset coincidence degree;
and outputting the position of the target face frame as the face position.
5. The face recognition method according to claim 1, wherein the calculating, based on the reconfigurable calculating unit, the similarity between the face feature vector of the face image having the quality reaching the preset condition and the preset face feature vector to perform face recognition includes:
Extracting the features of the face image based on a preset feature extraction model to generate a face feature vector;
and establishing a face information database according to the face feature vector.
6. The face recognition method according to claim 1, wherein the calculating, based on the reconfigurable calculating unit, the similarity between the face feature vector of the face image having the quality reaching the preset condition and the preset face feature vector to perform face recognition includes:
extracting the features of the face image based on a preset feature extraction model to generate a face feature vector to be detected;
inputting a pre-stored feature vector in a preset face information database to the reconfigurable computing unit so as to calculate the similarity between the face feature vector to be detected and the pre-stored feature vector, and determining a target feature vector according to the similarity;
and under the condition that the similarity between the target feature vector and the face feature vector to be detected is larger than a preset threshold value, determining that the face authentication is successful.
7. The face recognition method according to claim 6, wherein inputting the pre-stored feature vector in the pre-set face information database to the reconfigurable computing unit to calculate the similarity between the face feature vector to be detected and the pre-stored feature vector, and determining the target feature vector according to the similarity, comprises:
Dividing the pre-stored feature vectors in the face information database into a plurality of feature sets according to the memory capacity of the reconfigurable computing unit, so that the memory occupied by each feature set is smaller than the memory capacity;
sequentially inputting each feature set into the reconfigurable computing unit to compute the similarity of the face feature vector to be tested and the pre-stored feature vector of each feature set;
and sorting according to the corresponding similarity of each pre-stored feature vector to determine the pre-stored feature vector with preset sorting as the target feature vector.
8. The method of claim 6, wherein said calculating the similarity between the feature vector of the face to be detected and the pre-stored feature vector comprises:
calculating Euclidean distance between the face feature vector to be detected and the pre-stored feature vector; and
And determining the similarity of the face feature vector to be detected and the pre-stored feature vector according to the Euclidean distance.
9. A face recognition device, comprising:
the generation module is used for detecting face information in the acquired image to generate a face image;
The first detection module is used for detecting the quality of the face image based on a preset face screening model; and
The recognition module is used for calculating the similarity between the face feature vector of the face image with the quality reaching the preset condition and the preset face feature vector based on the reconfigurable calculation unit so as to carry out face recognition.
10. An embedded device, comprising a processor for detecting face information in an acquired image to generate a face image; detecting the quality of the face image based on a preset face screening model; and calculating the similarity between the face feature vector of the face image with the quality reaching the preset condition and the preset face feature vector based on the reconfigurable calculating unit so as to carry out face recognition.
11. The embedded device of claim 10, wherein the face information includes a face position of a target face, and the processor is configured to detect the face position in the face image based on a preset face detection model; and generating the face image according to the face position.
12. The embedded device of claim 11, wherein the face information further comprises a face key point, and the processor is configured to detect a location of the face key point in the face image based on a preset face detection model; aligning the face image according to the face position and the position of the face key point so that the posture of the target face of the aligned face image is adjusted to be a preset posture; and detecting the quality of the face image after alignment.
13. The embedded device of claim 11, wherein the processor is configured to generate a plurality of candidate frames of a preset size according to the size of the captured image, and output a score for each of the candidate frames; determining the candidate frames with scores greater than a preset score as face frames; acquiring the coincidence degree between any two face frames with coincidence parts, and determining the face frame with the highest score as a target face frame in the face frames with the coincidence degree larger than the preset coincidence degree; and outputting the position of the target face frame as the face position.
14. The embedded device of claim 10, wherein the processor is configured to extract features of the face image based on a preset feature extraction model to generate an acquired face feature vector; and establishing a face information database according to the collected face feature vectors.
15. The embedded device of claim 10, wherein the processor is configured to extract features of the face image based on a preset feature extraction model to generate a face feature vector to be detected; inputting a pre-stored feature vector in a preset face information database to the reconfigurable computing unit so as to calculate the similarity between the face feature vector to be detected and the pre-stored feature vector, and determining a target feature vector according to the similarity; and under the condition that the similarity between the target feature vector and the face feature vector to be detected is larger than a preset threshold value, determining that the face authentication is successful.
16. The embedded device of claim 15, wherein the processor is configured to: divide the pre-stored feature vectors in the face information database into a plurality of feature sets according to the memory capacity of the reconfigurable computing unit, such that the memory occupied by each feature set is smaller than that capacity; sequentially input each feature set into the reconfigurable computing unit to compute the similarity between the face feature vector to be detected and each pre-stored feature vector of the set; and sort the pre-stored feature vectors by their similarities to determine the pre-stored feature vector at a preset rank as the target feature vector.
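The chunked scoring in this claim can be modelled in a few lines. Here the accelerator's memory capacity is represented by a row count, and the distance-to-similarity mapping is an illustrative assumption; on the real device each chunk would be dispatched to the reconfigurable computing unit rather than scored on the host:

```python
import numpy as np

def match_in_chunks(query: np.ndarray, db: np.ndarray,
                    chunk_rows: int, top_k: int = 1) -> list[tuple[int, float]]:
    """Score the database one feature set at a time, then rank globally.

    chunk_rows models the accelerator memory limit of claim 16;
    the 1/(1+d) similarity is an assumed mapping, not from the patent.
    Returns the top_k (index, similarity) pairs, best first.
    """
    sims = np.empty(len(db))
    for start in range(0, len(db), chunk_rows):        # one "feature set" per pass
        chunk = db[start:start + chunk_rows]
        dists = np.linalg.norm(chunk - query, axis=1)
        sims[start:start + chunk_rows] = 1.0 / (1.0 + dists)
    ranked = np.argsort(sims)[::-1]                    # highest similarity first
    return [(int(i), float(sims[i])) for i in ranked[:top_k]]
```

Because only per-vector similarities cross the chunk boundary, the global ranking is identical to scoring the whole database at once, regardless of how it is partitioned.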
17. The embedded device of claim 15, wherein the processor is further configured to: calculate the Euclidean distance between the face feature vector to be detected and each pre-stored feature vector; and determine the similarity between the face feature vector to be detected and that pre-stored feature vector from the Euclidean distance.
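The claim requires only that similarity be derived from Euclidean distance, without fixing the mapping. A common monotone-decreasing choice, used here as an assumption, is 1/(1 + d), which yields 1.0 for identical vectors and approaches 0 as the distance grows:

```python
import numpy as np

def euclidean_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Map the Euclidean distance between two feature vectors to a
    similarity score in (0, 1]. The 1/(1+d) mapping is an assumed
    choice; the patent only requires similarity to follow from distance."""
    dist = float(np.linalg.norm(a - b))
    return 1.0 / (1.0 + dist)
```

Any strictly decreasing function of the distance would preserve the ranking used by claim 16, so the choice of mapping affects only the threshold value, not which vector becomes the target.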
18. A non-transitory computer-readable storage medium storing a computer program which, when executed by one or more processors, performs the face recognition method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310265736.0A CN116229556A (en) | 2023-03-13 | 2023-03-13 | Face recognition method and device, embedded equipment and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310265736.0A CN116229556A (en) | 2023-03-13 | 2023-03-13 | Face recognition method and device, embedded equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116229556A true CN116229556A (en) | 2023-06-06 |
Family
ID=86575084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310265736.0A Pending CN116229556A (en) | 2023-03-13 | 2023-03-13 | Face recognition method and device, embedded equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116229556A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117611516A (en) * | 2023-09-04 | 2024-02-27 | 北京智芯微电子科技有限公司 | Image quality evaluation, face recognition, label generation and determination methods and devices |
CN117611516B (en) * | 2023-09-04 | 2024-09-13 | 北京智芯微电子科技有限公司 | Image quality evaluation, face recognition, label generation and determination methods and devices |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109117803B (en) | Face image clustering method and device, server and storage medium | |
US11062123B2 (en) | Method, terminal, and storage medium for tracking facial critical area | |
CN109657533B (en) | Pedestrian re-identification method and related product | |
CN112001932B (en) | Face recognition method, device, computer equipment and storage medium | |
US8750573B2 (en) | Hand gesture detection | |
US8792722B2 (en) | Hand gesture detection | |
CN109829448B (en) | Face recognition method, face recognition device and storage medium | |
CN109145717B (en) | Face recognition method for online learning | |
CN102375970B (en) | A kind of identity identifying method based on face and authenticate device | |
WO2019080203A1 (en) | Gesture recognition method and system for robot, and robot | |
CN110287772B (en) | Method and device for extracting palm and palm center area of plane palm | |
EP3647992A1 (en) | Face image processing method and apparatus, storage medium, and electronic device | |
CN110263603B (en) | Face recognition method and device based on central loss and residual error visual simulation network | |
CN107370942A (en) | Photographic method, device, storage medium and terminal | |
CN111626163B (en) | Human face living body detection method and device and computer equipment | |
CN102938065A (en) | Facial feature extraction method and face recognition method based on large-scale image data | |
CN108986137B (en) | Human body tracking method, device and equipment | |
CN109815823B (en) | Data processing method and related product | |
CN106471440A (en) | Eye tracking based on efficient forest sensing | |
WO2014180108A1 (en) | Systems and methods for matching face shapes | |
CN113298158A (en) | Data detection method, device, equipment and storage medium | |
CN116229556A (en) | Face recognition method and device, embedded equipment and computer readable storage medium | |
CN112766065A (en) | Mobile terminal examinee identity authentication method, device, terminal and storage medium | |
CN112001285A (en) | Method, device, terminal and medium for processing beautifying image | |
CN109961103B (en) | Training method of feature extraction model, and image feature extraction method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |