CN110866466A - Face recognition method, face recognition device, storage medium and server - Google Patents

Face recognition method, face recognition device, storage medium and server

Info

Publication number
CN110866466A
CN110866466A · Application CN201911042888.4A · Granted as CN110866466B
Authority
CN
China
Prior art keywords
face
image
feature
target
attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911042888.4A
Other languages
Chinese (zh)
Other versions
CN110866466B (en)
Inventor
曾平安
周超勇
彭旭
王婷如
刘玉宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201911042888.4A priority Critical patent/CN110866466B/en
Priority to PCT/CN2019/118567 priority patent/WO2021082087A1/en
Publication of CN110866466A publication Critical patent/CN110866466A/en
Application granted granted Critical
Publication of CN110866466B publication Critical patent/CN110866466B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Abstract

The application belongs to the technical field of computers and provides a face recognition method, a face recognition device, a storage medium, and a server. In the face recognition method, on the one hand, the face images in the image to be recognized whose image quality score is below a preset threshold are filtered out, that is, the face images whose poor quality makes misrecognition likely are removed, which effectively reduces the misrecognition rate of face recognition. On the other hand, when the features of a face image are matched, multi-dimensional features that fuse face features with face attribute features are used; compared with the traditional approach of matching on face features alone, this greatly improves the accuracy of face recognition.

Description

Face recognition method, face recognition device, storage medium and server
Technical Field
The present application belongs to the field of computer technology, and in particular relates to a face recognition method, an apparatus, a storage medium, and a server.
Background
A person's identity can be determined by extracting and recognizing facial features, so face recognition is widely applied in industries such as security and finance, where it provides stronger safety guarantees for enterprises. However, traditional face recognition systems have poor robustness: image quality factors in a face image, such as illumination, angle, blur, and aging, degrade the recognition result and lead to a high misrecognition rate.
Disclosure of Invention
In view of this, the present application provides a face recognition method, which can effectively improve the accuracy of face recognition and reduce the false recognition rate.
In a first aspect, an embodiment of the present application provides a face recognition method, including:
acquiring an image to be recognized, wherein the image to be recognized comprises more than one face image;
intercepting each face image contained in the image to be recognized;
inputting each intercepted face image into a pre-constructed image quality detection model respectively to obtain the image quality score of each face image, wherein the image quality detection model is a neural network model obtained by training a plurality of sample face images with pre-labeled image quality scores as a training set;
filtering the face images with the image quality scores smaller than a preset threshold value in all the face images, and taking the rest face images as target face images to be recognized;
inputting the target face image into a face attribute detection model which is constructed in advance to obtain the face attribute characteristics of the target face image, wherein the face attribute detection model is a neural network model which is obtained by training a plurality of sample face images with known face attribute characteristics as a training set;
extracting the face features of the target face image;
fusing the face features and face attribute features of the target face image to obtain multi-dimensional features of the target face image;
and respectively matching the multidimensional characteristics of the target face image with the multidimensional characteristics of each face image with known identity in a preset image database to obtain a face recognition result of the target face image.
According to the face recognition method provided by the embodiment of the application, on the one hand, the face images in the image to be recognized whose image quality score is below the preset threshold are filtered out, that is, the face images whose poor quality makes misrecognition likely are removed, which effectively reduces the misrecognition rate of face recognition. On the other hand, when the features of a face image are matched, multi-dimensional features that fuse face features with face attribute features are used; compared with the traditional approach of matching on face features alone, this greatly improves the accuracy of face recognition.
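The recognition flow described above can be sketched end to end. This is a hypothetical illustration: the three model objects (`quality_model`, `attribute_model`, `feature_extractor`) are stand-ins for the pre-trained neural networks named in the text, and cosine similarity is an assumed matching metric, since the embodiment does not specify one.

```python
def cosine(a, b):
    # Cosine similarity between two feature vectors (assumed metric).
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def recognize_faces(face_images, quality_model, attribute_model,
                    feature_extractor, database, threshold=60):
    results = []
    for face in face_images:
        score = quality_model(face)      # score each cropped face
        if score < threshold:            # filter out low-quality faces
            continue
        attrs = attribute_model(face)    # face attribute features
        feats = feature_extractor(face)  # face features
        fused = feats + attrs            # multi-dimensional feature
        # match against each known identity in the database
        best = max(database, key=lambda e: cosine(fused, e["feature"]))
        results.append(best["identity"])
    return results
```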
Further, the face attribute detection model includes a face gender detection model, a face age detection model and a face race detection model, and the obtaining of the face attribute characteristics of the target face image by inputting the target face image into the face attribute detection model constructed in advance may include:
inputting the target face image into the face gender detection model, the face age detection model and the face race detection model to respectively obtain a face gender feature, a face age feature and a face race feature of the target face image;
and fusing the face gender characteristic, the face age characteristic and the face ethnicity characteristic to obtain the face attribute characteristic of the target face image.
To obtain the face attribute features, three models can be constructed: a face gender detection model, a face age detection model, and a face race detection model, which respectively detect the face gender feature, face age feature, and face race feature of the target face image. The three detected attribute features are then fused into the final face attribute feature.
Further, the fusing the face gender feature, the face age feature and the face ethnicity feature may include:
performing an L1 normalization operation on the face gender feature, the face age feature and the face ethnicity feature;
and fusing the normalized face gender characteristic, the normalized face age characteristic and the normalized face ethnicity characteristic.
Because the three feature types (face gender, face age, and face race) differ greatly in scale, fusing them without normalization would degrade subsequent face recognition. The three features are therefore normalized before fusion, which improves the accuracy of subsequent face recognition.
Further, the fusing the normalized face gender feature, the face age feature and the face ethnicity feature may include:
inputting the normalized face gender feature, the normalized face age feature and the normalized face ethnicity feature into a pre-constructed attribute feature selection model, and determining a target feature fusion mode according to an output result of the attribute feature selection model;
selecting and fusing target attribute features from the normalized human face gender features, the normalized human face age features and the normalized human face ethnicity features according to the target feature fusion mode;
the attribute feature selection model is constructed by the following steps:
acquiring a plurality of model sample face images of known identities of known face gender characteristics, face age characteristics and face ethnicity characteristics;
constructing a plurality of feature fusion modes by using the face gender feature, the face age feature and the face ethnicity feature of the model sample face image according to a traversal mode, wherein each feature fusion mode comprises more than one face attribute feature of the face gender feature, the face age feature and the face ethnicity feature;
respectively fusing the human face attribute characteristics contained in each characteristic fusion mode and the human face characteristics of the model sample human face image, then identifying the human face, and counting the identification accuracy rate corresponding to each characteristic fusion mode;
associating the face gender characteristics, the face age characteristics and the face ethnicity characteristics of the model sample image with the characteristic fusion mode with the highest recognition accuracy;
and training to obtain the attribute feature selection model by taking the model sample image in a face attribute feature associated feature fusion mode as a training set.
To further improve the accuracy of face recognition, an attribute feature selection model can be constructed in advance. The model selects and fuses the attribute features that most benefit face recognition, that is, it automatically chooses among all the attribute features the combination that yields the face attribute feature most favorable for recognition. The model matches the input normalized face gender, face age, and face race against those of each sample image, finds the sample image with the highest similarity, and outputs the optimal fusion mode associated with that sample image, thereby determining which target attribute features to select and fuse.
Further, after extracting the face features of the target face image, before fusing the face features and the face attribute features of the target face image, the method may further include:
and adjusting the face characteristics of the target face image according to the target attribute characteristics.
The specific adjustment mode may include:
determining a face feature region associated with the target attribute feature;
and improving the proportion of the human face features in the human face feature region in the target human face image.
To improve the accuracy of face recognition, the face features of the target face image can be adjusted according to the selected target attribute features. For example, if the target attribute feature is male, the weight of male-specific parts of the face features is increased, for example the proportion of the beard region in the face features.
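The adjustment described above can be sketched as a simple re-weighting of feature dimensions. The mapping from an attribute to a set of feature indices (e.g. a beard region for the male attribute) is a hypothetical illustration, not a concrete scheme given in the text.

```python
def reweight_features(features, region_indices, boost=1.5):
    """Scale the feature dimensions tied to an attribute-linked face region.

    region_indices: set of indices assumed to belong to the region
    associated with the target attribute feature (hypothetical mapping).
    """
    return [v * boost if i in region_indices else v
            for i, v in enumerate(features)]
```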
Specifically, the intercepting of each face image included in the image to be recognized may include:
carrying out face detection on the image to be recognized by adopting a face detection algorithm;
determining the position coordinates of each face image contained in the image to be recognized according to the face detection result;
and intercepting the image of the area where the position coordinates are located from the image to be identified to obtain each face image.
The image to be recognized is detected with a face detection algorithm to obtain the coordinates of each face in the image, and a face crop operation is then performed to obtain the cropped face images. The purpose of this operation is to remove the background information around each face and reduce noise interference.
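A minimal sketch of the crop step, assuming the detector has already returned bounding boxes as (x, y, w, h) and the image is a row-major 2D array. In practice the boxes would come from a face detection library (e.g. an OpenCV cascade or a neural detector); that part is not shown here.

```python
def crop_faces(image, boxes):
    """Cut each detected face region out of the full image.

    image: row-major 2D array (list of rows)
    boxes: list of (x, y, w, h) bounding boxes from a face detector
    """
    crops = []
    for x, y, w, h in boxes:
        # Slice h rows starting at y, then w columns starting at x.
        crops.append([row[x:x + w] for row in image[y:y + h]])
    return crops
```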
In a second aspect, an embodiment of the present application provides a face recognition apparatus, including:
the image acquisition module is used for acquiring an image to be recognized, wherein the image to be recognized comprises more than one face image;
the face intercepting module is used for intercepting each face image contained in the image to be recognized;
the image quality detection module is used for respectively inputting each intercepted human face image into a pre-constructed image quality detection model to obtain the image quality score of each human face image, and the image quality detection model is a neural network model obtained by training a plurality of sample human face images with pre-labeled image quality scores as a training set;
the face filtering module is used for filtering the face images of which the image quality scores are smaller than a preset threshold value in all the face images, and the rest face images are used as target face images to be recognized;
the human face attribute detection module is used for inputting the target human face image into a human face attribute detection model which is constructed in advance to obtain the human face attribute characteristics of the target human face image, and the human face attribute detection model is a neural network model which is obtained by taking a plurality of sample human face images with known human face attribute characteristics as a training set for training;
the face feature extraction module is used for extracting the face features of the target face image;
the characteristic fusion module is used for fusing the face characteristic and the face attribute characteristic of the target face image to obtain the multi-dimensional characteristic of the target face image;
and the face matching module is used for respectively matching the multi-dimensional features of the target face image with the multi-dimensional features of each face image with known identity in a preset image database to obtain a face recognition result of the target face image.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the face recognition method as set forth in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the face recognition method as set forth in the first aspect of the embodiment of the present application.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when running on a terminal device, causes the terminal device to execute the face recognition method according to the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that those skilled in the art could obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of a first embodiment of a face recognition method according to an embodiment of the present application;
fig. 2 is a flowchart of a second embodiment of a face recognition method according to an embodiment of the present application;
fig. 3 is a flowchart of a face recognition method according to a third embodiment of the present application;
fig. 4 is a block diagram of an embodiment of a face recognition apparatus according to an embodiment of the present application;
fig. 5 is a schematic diagram of a server according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail. Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
The application provides a face recognition method, which can effectively improve the accuracy of face recognition and reduce the false recognition rate.
It should be understood that the execution subject of the face recognition method proposed in the embodiments of the present application is a server.
Referring to fig. 1, a first embodiment of a face recognition method in the embodiment of the present application includes:
101. acquiring an image to be recognized, wherein the image to be recognized comprises more than one face image;
firstly, an image to be recognized is obtained, and the image to be recognized comprises more than one face image. Specifically, the camera may be used to capture an image of a certain designated area, for example, capture images of each person waiting for business handling in a bank lobby, as the image to be identified.
102. Intercepting each face image contained in the image to be recognized;
after the image to be recognized is obtained, all face images contained in the image to be recognized are intercepted. Specifically, various face detection algorithms can be adopted to detect faces from the image to be recognized and intercept the detected face image, so that background information of non-face areas in the image can be removed, and noise interference of subsequent face recognition execution is reduced.
103. Inputting each intercepted face image into a pre-constructed image quality detection model respectively to obtain an image quality score of each face image;
then, inputting each intercepted face image into a pre-constructed image quality detection model respectively to obtain the image quality score of each face image, wherein the image quality detection model is a neural network model obtained by training a plurality of sample face images with pre-labeled image quality scores (such as 0-100) as a training set. The model outputs a score of the quality of the face image by detecting image quality parameters such as exposure rate, dark light rate, shielding degree, large deflection angle, blurring degree and the like of the face image, wherein the higher the score is, the better the quality of the image is. Specifically, the model matches the image quality parameters of the face image of the input model with the image quality parameters of each sample face image to obtain the similarity, finds the sample face image with the highest similarity, and determines the image quality score of the face image of the input model according to the image quality score corresponding to the sample face image.
104. Filtering the face images with the image quality scores smaller than a preset threshold value in all the face images, and taking the rest face images as target face images to be recognized;
then, filtering out facial images with image quality scores smaller than a preset threshold (such as 60) from each intercepted facial image, wherein the facial images are poor in image quality and easy to generate false recognition, so that filtering is achieved. After filtering, the rest is the face image with better image quality, which is used as the target face image of the subsequent processing. Specifically, the threshold value may be set according to the requirement of the current application situation on the accuracy of the face image, for example, if the accuracy of face recognition is required to be high and false recognition is not generated, a higher threshold value may be set.
105. Inputting the target face image into a face attribute detection model which is constructed in advance to obtain the face attribute characteristics of the target face image;
after the target face image with better image quality is selected, the target face image is input into a face attribute detection model which is constructed in advance, and the face attribute characteristics of the target face image are obtained. The face attribute feature is a feature for representing a relevant attribute of a face, such as an age, a gender, a race, an expression, and the like. Specifically, the face attribute detection model is a neural network model obtained by training a plurality of sample face images with known face attribute characteristics as a training set, for example, a deep neural network model for detecting the gender of a face, a face image is input, and the gender of the face image is output; or a deep neural network model for detecting the age of the face, inputting a face image and outputting the age of the face image. In addition, the face attribute detection model may also include a plurality of sub-models, each of which respectively detects different face attributes, so that a plurality of different face attribute features of the target face image may be obtained.
106. Extracting the face features of the target face image;
next, the face features of the target face image are extracted, and the face features may be features of five sense organs, hair or accessories of a person.
107. Fusing the face features and face attribute features of the target face image to obtain multi-dimensional features of the target face image;
then, the face features and the face attribute features of the target face image are fused to obtain the multi-dimensional features of the target face image, namely the multi-dimensional features comprising the face features and the face attribute features. In terms of data representation, assuming that the extracted face features are 128-dimensional data, and the face attribute features are 3-dimensional (including gender 1-dimensional, age 1-dimensional, race 1-dimensional) data, the two are fused to obtain 131-dimensional data, i.e., multidimensional features.
108. And respectively matching the multidimensional characteristics of the target face image with the multidimensional characteristics of each face image with known identity in a preset image database to obtain a face recognition result of the target face image.
Finally, the multi-dimensional features of the target face image are matched against the multi-dimensional features of each face image with known identity in a preset image database, yielding the face recognition result for the target face image, i.e. the identity of each target face image. Compared with the traditional approach of comparing face features alone, comparing multi-dimensional features further improves the accuracy of face recognition.
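The embodiment does not name the matching metric; cosine similarity with a minimum-similarity cutoff is one common, hypothetical choice, sketched below. Returning `None` when no entry is close enough models the "would rather not recognize than misrecognize" requirement.

```python
def cosine_similarity(a, b):
    # Cosine similarity between two multi-dimensional feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def match_identity(query, database, min_similarity=0.8):
    """Return the best-matching known identity, or None if nothing is close."""
    best_id, best_sim = None, min_similarity
    for entry in database:
        sim = cosine_similarity(query, entry["feature"])
        if sim >= best_sim:
            best_id, best_sim = entry["identity"], sim
    return best_id
```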
One practical application scenario of the embodiment is identifying people in a bank lobby. A camera captures an image of the lobby as the image to be recognized, which contains several face images. The face attribute features of these face images are obtained and fused with the face features to produce multi-dimensional features, which are then compared with the multi-dimensional face features of customers stored in the bank's database system, thereby identifying the identity corresponding to each face image. Because a bank requires high identity recognition accuracy (failing to recognize is acceptable, but misrecognition is not), face images of poor quality can be filtered out during processing.
According to the face recognition method provided by the embodiment of the application, on the one hand, the face images in the image to be recognized whose image quality score is below the preset threshold are filtered out, that is, the face images whose poor quality makes misrecognition likely are removed, which effectively reduces the misrecognition rate of face recognition. On the other hand, when the features of a face image are matched, multi-dimensional features that fuse face features with face attribute features are used; compared with the traditional approach of matching on face features alone, this greatly improves the accuracy of face recognition.
Referring to fig. 2, a second embodiment of a face recognition method in the embodiment of the present application includes:
201. acquiring an image to be recognized, wherein the image to be recognized comprises more than one face image;
202. intercepting each face image contained in the image to be recognized;
203. inputting each intercepted face image into a pre-constructed image quality detection model respectively to obtain an image quality score of each face image;
204. filtering the face images with the image quality scores smaller than a preset threshold value in all the face images, and taking the rest face images as target face images to be recognized;
the steps 201-204 are the same as the steps 101-104, and the related description of the steps 101-104 can be referred to specifically.
205. Inputting the target face image into a face gender detection model, a face age detection model and a face race detection model to respectively obtain a face gender characteristic, a face age characteristic and a face race characteristic of the target face image;
in the embodiment of the application, 3 deep neural network models of a face gender detection model, a face age detection model and a face race detection model are constructed in advance and are respectively used for detecting the face gender characteristics, the face age characteristics and the face race characteristics of the target face image. The construction mode and the working principle of the face gender detection model, the face age detection model and the face race detection model can refer to the prior art.
206. Fusing the face gender characteristic, the face age characteristic and the face ethnicity characteristic to obtain a face attribute characteristic of the target face image;
and then, fusing the face gender characteristic, the face age characteristic and the face ethnicity characteristic to obtain the face attribute characteristic of the target face image. The data fusion refers to combining a plurality of different data into a multidimensional data, for example, performing data fusion on the face gender (1 dimension), the face age (1 dimension) and the face race (1 dimension) to obtain the face attribute feature (3 dimensions).
Specifically, step 206 may include:
(1) performing an L1 normalization operation on the face gender feature, the face age feature and the face ethnicity feature;
(2) and fusing the normalized face gender characteristic, the normalized face age characteristic and the normalized face ethnicity characteristic.
Because the three feature types (face gender, face age, and face race) differ greatly in scale, fusing them without normalization would degrade subsequent face recognition. The three features are therefore normalized before fusion, which improves the accuracy of subsequent face recognition.
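The L1 normalization step can be sketched as dividing a feature vector by the sum of its absolute values, so that differently scaled attributes (gender around 1, age in the tens) become comparable before fusion. The vector form shown here is an assumption; the text only names the L1 operation.

```python
def l1_normalize(vec):
    """Divide each component by the vector's L1 norm (sum of absolute values)."""
    total = sum(abs(v) for v in vec)
    return [v / total for v in vec] if total else list(vec)
```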
Further, the step (2) may include:
(2.1) inputting the normalized face gender feature, the normalized face age feature and the normalized face ethnicity feature into a pre-constructed attribute feature selection model, and determining a target feature fusion mode according to an output result of the attribute feature selection model;
and (2.2) selecting target attribute features from the normalized human face gender features, the normalized human face age features and the normalized human face ethnicity features according to the target feature fusion mode and fusing the target attribute features.
The attribute feature selection model is constructed by the following steps:
(a) acquiring a plurality of model sample face images of known identities of known face gender characteristics, face age characteristics and face ethnicity characteristics;
(b) constructing a plurality of feature fusion modes by using the face gender feature, the face age feature and the face ethnicity feature of the model sample face image according to a traversal mode, wherein each feature fusion mode comprises more than one face attribute feature of the face gender feature, the face age feature and the face ethnicity feature;
(c) respectively fusing the human face attribute characteristics contained in each characteristic fusion mode and the human face characteristics of the model sample human face image, then identifying the human face, and counting the identification accuracy rate corresponding to each characteristic fusion mode;
(d) associating the face gender characteristics, the face age characteristics and the face ethnicity characteristics of the model sample image with the characteristic fusion mode with the highest recognition accuracy;
(e) and training to obtain the attribute feature selection model by taking the model sample image in a face attribute feature associated feature fusion mode as a training set.
In order to further improve the accuracy of face recognition, an attribute feature selection model may be constructed in advance. This model selects and fuses the attribute features that are beneficial to face recognition, that is, it automatically picks, from all the attribute features, the combination most favorable to recognition and fuses it into the face attribute feature. Specifically, a large number of sample face images of the same identity, with known face gender, face age and face race, may be collected in advance. When recognizing these sample images, a certain number of attribute features are selected for fusion according to permutation and combination (for example, face gender + face age, face gender + face race, and so on), the face recognition accuracy of the sample images under each feature fusion mode is counted separately, the feature fusion mode with the highest accuracy is found, and this mode is associated with the un-fused face gender, face age and face race values of the sample images. The attribute feature selection model is then trained with the face gender, face age and face race of the sample face images, together with the corresponding feature fusion mode, as the training set. At inference time, the model matches the input normalized face gender, face age and face race against those of each sample image, finds the sample image with the highest similarity, and outputs the optimal feature fusion mode associated with that sample image, thereby determining the target attribute features to be selected and fused.
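The construction steps (a) through (e) can be sketched as follows; the `accuracy_of` callback and the toy accuracy table are stand-ins for actually running recognition on sample images, not part of the patented method:

```python
from itertools import combinations

ATTRS = ("gender", "age", "race")

def candidate_fusion_modes():
    """All non-empty subsets of the attribute features (the traversal in step (b))."""
    modes = []
    for k in range(1, len(ATTRS) + 1):
        modes.extend(combinations(ATTRS, k))
    return modes

def best_fusion_mode(accuracy_of):
    """Pick the fusion mode with the highest recognition accuracy (steps (c)-(d)).
    `accuracy_of` is a placeholder for evaluating recognition on sample images."""
    return max(candidate_fusion_modes(), key=accuracy_of)

# Toy accuracy table, purely illustrative values:
toy_acc = {("gender",): 0.80, ("age",): 0.78, ("race",): 0.75,
           ("gender", "age"): 0.85, ("gender", "race"): 0.83,
           ("age", "race"): 0.81, ("gender", "age", "race"): 0.88}
best = best_fusion_mode(lambda m: toy_acc[m])
```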
207. Extracting the face features of the target face image;
208. fusing the face features and face attribute features of the target face image to obtain multi-dimensional features of the target face image;
further, after extracting the face features of the target face image, before fusing the face features and the face attribute features of the target face image, the method may further include:
and adjusting the face characteristics of the target face image according to the target attribute characteristics.
When the face attribute features are fused in step 206, if the fused target attribute features are determined by the attribute feature selection model, the face features of the target face image may additionally be adjusted according to the target attribute features to further improve the accuracy of face recognition. The specific adjustment may be: determining a face feature region associated with the target attribute feature, and increasing the proportion of the face features of that region within the face features of the target face image. For example, if the target attribute feature indicates a male, the proportion of male-specific parts of the face features is increased, such as raising the proportion of the beard region in the face features. By increasing the proportion of certain specific features, those features are compared preferentially during subsequent face feature matching, which relatively weakens the interference of other, non-key face features and further improves the accuracy of face recognition.
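A sketch of this adjustment, under the assumption that the face feature is a vector and the "region" is a set of dimension indices associated with the target attribute (e.g. beard-related dimensions for a male attribute); the boost factor is an illustrative choice:

```python
import numpy as np

def boost_region(features, region_idx, factor=1.5):
    """Increase the relative weight of the feature dimensions in
    `region_idx`, then re-normalize so the overall scale is preserved."""
    f = np.asarray(features, dtype=float).copy()
    f[region_idx] *= factor
    return f / (np.linalg.norm(f) + 1e-12)

feat = np.ones(8)                       # placeholder face feature vector
adjusted = boost_region(feat, region_idx=[2, 3], factor=2.0)
```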
209. And respectively matching the multidimensional characteristics of the target face image with the multidimensional characteristics of each face image with known identity in a preset image database to obtain a face recognition result of the target face image.
Steps 207 to 209 are the same as the corresponding steps in the foregoing first embodiment, and specific reference may be made to the related description of those steps.
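The matching in step 209 can be sketched as a nearest-neighbor search by cosine similarity; the dictionary database layout and the 0.5 acceptance threshold are illustrative assumptions, not details from the text:

```python
import numpy as np

def match_identity(query, database, threshold=0.5):
    """Compare the query's multi-dimensional feature against each known
    identity and return the best match, or None below the threshold."""
    def cos(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    best_id, best_sim = None, -1.0
    for identity, feat in database.items():
        sim = cos(query, feat)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id if best_sim >= threshold else None

db = {"alice": [1.0, 0.0, 0.2], "bob": [0.0, 1.0, 0.1]}
result = match_identity([0.9, 0.1, 0.2], db)
```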
The method comprises the steps of inputting a target face image into a face gender detection model, a face age detection model and a face ethnicity detection model to respectively obtain a face gender characteristic, a face age characteristic and a face ethnicity characteristic of the target face image; and then fusing the face gender characteristic, the face age characteristic and the face ethnicity characteristic to obtain the face attribute characteristic of the target face image. In addition, when the face attribute features are fused, the optimal target attribute features can be selected from the normalized face gender features, the normalized face age features and the normalized face ethnicity features through the attribute feature selection model for fusion, so that the accuracy of the subsequent face recognition can be effectively improved.
Referring to fig. 3, a third embodiment of a face recognition method in the embodiment of the present application includes:
301. acquiring an image to be recognized, wherein the image to be recognized comprises more than one face image;
step 301 is the same as step 101, and specific reference may be made to the related description of step 101.
302. Carrying out face detection on the image to be recognized by adopting a face detection algorithm;
after the image to be recognized is obtained, face detection is carried out on the image to be recognized by adopting a face detection algorithm. Specifically, various existing face detection algorithms can be adopted to determine whether the image to be recognized has a face and the position of each face in the image.
303. Determining the position coordinates of each face image contained in the image to be recognized according to the face detection result;
after the result of the face detection is obtained, each face image in the image to be recognized can be located, that is, the position coordinates of each face image are obtained.
304. Intercepting the image of the area where the position coordinates are located from the image to be recognized to obtain each face image;
Then, the image of the region where the position coordinates are located is cropped from the image to be recognized to obtain each face image. That is, a face crop operation is performed to obtain the cropped face images; the purpose is to remove the background information around each face and reduce noise interference.
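A minimal sketch of the crop operation, assuming the face detector returns (x, y, w, h) boxes in pixel coordinates (the box format is an assumption; detectors vary):

```python
import numpy as np

def crop_faces(image, boxes):
    """Cut out each detected face region to drop non-face background
    and reduce noise; `image` is an H x W (x C) array."""
    crops = []
    for (x, y, w, h) in boxes:
        crops.append(image[y:y + h, x:x + w])
    return crops

img = np.zeros((100, 120), dtype=np.uint8)          # placeholder image
faces = crop_faces(img, [(10, 20, 30, 40), (50, 5, 25, 25)])
```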
305. Inputting each intercepted face image into a pre-constructed image quality detection model respectively to obtain an image quality score of each face image;
306. filtering the face images with the image quality scores smaller than a preset threshold value in all the face images, and taking the rest face images as target face images to be recognized;
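Steps 305 and 306 amount to a simple threshold filter over the quality scores; the 0-to-1 score range and the threshold value below are illustrative assumptions:

```python
def filter_by_quality(face_images, scores, threshold=0.6):
    """Keep only the faces whose image quality score meets the preset
    threshold; the rest become the target face images to be recognized."""
    return [img for img, s in zip(face_images, scores) if s >= threshold]

kept = filter_by_quality(["face1", "face2", "face3"], [0.9, 0.4, 0.7])
```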
307. inputting the target face image into a face attribute detection model which is constructed in advance to obtain the face attribute characteristics of the target face image;
308. extracting the face features of the target face image;
309. fusing the face features and face attribute features of the target face image to obtain multi-dimensional features of the target face image;
310. and respectively matching the multidimensional characteristics of the target face image with the multidimensional characteristics of each face image with known identity in a preset image database to obtain a face recognition result of the target face image.
The steps 305-310 are the same as the steps 103-108, and specific reference may be made to the related description of the steps 103-108.
Compared with the first embodiment of the present application, the present embodiment provides a specific way of intercepting each face image included in the image to be recognized. The human face is detected from the image to be recognized and the detected human face image is intercepted, so that the background information of a non-human face area in the image can be removed, and the noise interference of subsequent human face recognition execution is reduced.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 4 shows a block diagram of a face recognition apparatus provided in the embodiment of the present application, which corresponds to the face recognition method described in the foregoing embodiment, and only shows portions related to the embodiment of the present application for convenience of description.
Referring to fig. 4, the apparatus includes:
the image acquiring module 401 is configured to acquire an image to be recognized, where the image to be recognized includes more than one face image;
a face capturing module 402, configured to capture each face image included in the image to be recognized;
the image quality detection module 403 is configured to input each captured face image into a pre-constructed image quality detection model respectively, so as to obtain an image quality score of each face image, where the image quality detection model is a neural network model trained by using a plurality of sample face images labeled with image quality scores in advance as a training set;
a face filtering module 404, configured to filter, from the face images, face images whose image quality scores are smaller than a preset threshold, where the remaining face images are used as target face images to be recognized;
a face attribute detection module 405, configured to input the target face image into a pre-constructed face attribute detection model to obtain a face attribute feature of the target face image, where the face attribute detection model is a neural network model obtained by training a plurality of sample face images with known face attribute features as a training set;
a face feature extraction module 406, configured to extract a face feature of the target face image;
a feature fusion module 407, configured to fuse the face features and the face attribute features of the target face image to obtain multidimensional features of the target face image;
the face matching module 408 is configured to match the multidimensional features of the target face image with the multidimensional features of each face image with a known identity in a preset image database, so as to obtain a face recognition result of the target face image.
Further, the face attribute detection model includes a face gender detection model, a face age detection model and a face race detection model, and the face attribute detection module may include:
a face image input unit, configured to input the target face image into the face gender detection model, the face age detection model, and the face race detection model, so as to obtain a face gender feature, a face age feature, and a face race feature of the target face image, respectively;
and the attribute feature fusion unit is used for fusing the face gender feature, the face age feature and the face ethnicity feature to obtain the face attribute feature of the target face image.
Further, the attribute feature fusing unit may include:
a normalization subunit, configured to perform L1 normalization operation on the face gender feature, the face age feature, and the face ethnicity feature;
and the attribute feature fusion subunit is used for fusing the normalized face gender feature, the normalized face age feature and the normalized face ethnicity feature.
Still further, the attribute feature fusion subunit may include:
a feature fusion mode determining unit, configured to input the normalized face gender feature, the normalized face age feature, and the normalized face ethnicity feature into a pre-constructed attribute feature selection model, and determine a target feature fusion mode according to an output result of the attribute feature selection model;
a target attribute feature selection subunit, configured to select target attribute features from the normalized face gender feature, the normalized face age feature and the normalized face ethnicity feature according to the target feature fusion mode, and to fuse them;
the attribute feature selection model is constructed by the following steps:
acquiring a plurality of model sample face images of known identities of known face gender characteristics, face age characteristics and face ethnicity characteristics;
constructing a plurality of feature fusion modes by using the face gender feature, the face age feature and the face ethnicity feature of the model sample face image according to a traversal mode, wherein each feature fusion mode comprises more than one face attribute feature of the face gender feature, the face age feature and the face ethnicity feature;
respectively fusing the human face attribute characteristics contained in each characteristic fusion mode and the human face characteristics of the model sample human face image, then identifying the human face, and counting the identification accuracy rate corresponding to each characteristic fusion mode;
associating the face gender characteristics, the face age characteristics and the face ethnicity characteristics of the model sample image with the characteristic fusion mode with the highest recognition accuracy;
and training to obtain the attribute feature selection model by taking the model sample image in a face attribute feature associated feature fusion mode as a training set.
Further, the face recognition apparatus may further include:
and the face feature adjusting module is used for adjusting the face features of the target face image according to the target attribute features.
Further, the face feature adjustment module may include:
a feature region determination unit, configured to determine a face feature region associated with the target attribute feature;
and the characteristic proportion adjusting unit is used for improving the proportion of the human face characteristics in the human face characteristic region in the target human face image.
Further, the face intercepting module may include:
the face detection unit is used for carrying out face detection on the image to be recognized by adopting a face detection algorithm;
the position coordinate determination unit is used for determining the position coordinates of each face image contained in the image to be recognized according to the result of the face detection;
and the face intercepting unit is used for intercepting the image of the area where the position coordinates are located from the image to be identified to obtain each face image.
An embodiment of the present application further provides a computer-readable storage medium, which stores computer-readable instructions, and the computer-readable instructions, when executed by a processor, implement the steps of any one of the face recognition methods shown in fig. 1 to 3.
An embodiment of the present application further provides a server, which includes a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, where the processor executes the computer readable instructions to implement the steps of any one of the face recognition methods shown in fig. 1 to 3.
Embodiments of the present application further provide a computer program product, which when running on a server, causes the server to execute the steps of implementing any one of the face recognition methods shown in fig. 1 to 3.
Fig. 5 is a schematic diagram of a server according to an embodiment of the present application. As shown in fig. 5, the server 5 of this embodiment includes: a processor 50, a memory 51, and computer readable instructions 52 stored in said memory 51 and executable on said processor 50. The processor 50, when executing the computer readable instructions 52, implements the steps in the various embodiments of the face recognition method described above, such as the steps 101 to 108 shown in fig. 1. Alternatively, the processor 50, when executing the computer readable instructions 52, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules 401 to 408 shown in fig. 4.
Illustratively, the computer readable instructions 52 may be partitioned into one or more modules/units that are stored in the memory 51 and executed by the processor 50 to accomplish the present application. The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, which are used to describe the execution of the computer-readable instructions 52 in the server 5.
The server 5 may be a computing device such as a smart phone, a notebook computer, a palmtop computer, or a cloud server. The server 5 may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will appreciate that fig. 5 is merely an example of the server 5 and does not constitute a limitation of the server 5, which may include more or fewer components than shown, or combine some components, or use different components; for example, the server 5 may also include input/output devices, network access devices, buses, etc.
The processor 50 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the server 5, such as a hard disk or a memory of the server 5. The memory 51 may also be an external storage device of the server 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) provided on the server 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the server 5. The memory 51 is used to store the computer-readable instructions and other programs and data required by the server. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the embodiments of the methods described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A face recognition method, comprising:
acquiring an image to be recognized, wherein the image to be recognized comprises more than one face image;
intercepting each face image contained in the image to be recognized;
inputting each intercepted face image into a pre-constructed image quality detection model respectively to obtain the image quality score of each face image, wherein the image quality detection model is a neural network model obtained by training a plurality of sample face images with pre-labeled image quality scores as a training set;
filtering the face images with the image quality scores smaller than a preset threshold value in all the face images, and taking the rest face images as target face images to be recognized;
inputting the target face image into a face attribute detection model which is constructed in advance to obtain the face attribute characteristics of the target face image, wherein the face attribute detection model is a neural network model which is obtained by training a plurality of sample face images with known face attribute characteristics as a training set;
extracting the face features of the target face image;
fusing the face features and face attribute features of the target face image to obtain multi-dimensional features of the target face image;
and respectively matching the multidimensional characteristics of the target face image with the multidimensional characteristics of each face image with known identity in a preset image database to obtain a face recognition result of the target face image.
2. The face recognition method of claim 1, wherein the face attribute detection model comprises a face gender detection model, a face age detection model and a face race detection model, and the inputting the target face image into a face attribute detection model constructed in advance to obtain the face attribute characteristics of the target face image comprises:
inputting the target face image into the face gender detection model, the face age detection model and the face race detection model to respectively obtain a face gender feature, a face age feature and a face race feature of the target face image;
and fusing the face gender characteristic, the face age characteristic and the face ethnicity characteristic to obtain the face attribute characteristic of the target face image.
3. The face recognition method of claim 2, wherein the fusing the face gender feature, the face age feature, and the face ethnicity feature comprises:
performing an L1 normalization operation on the face gender feature, the face age feature and the face ethnicity feature;
and fusing the normalized face gender characteristic, the normalized face age characteristic and the normalized face ethnicity characteristic.
4. The face recognition method of claim 3, wherein the fusing the normalized face gender feature, the face age feature, and the face ethnicity feature comprises:
inputting the normalized face gender feature, the normalized face age feature and the normalized face ethnicity feature into a pre-constructed attribute feature selection model, and determining a target feature fusion mode according to an output result of the attribute feature selection model;
selecting and fusing target attribute features from the normalized human face gender features, the normalized human face age features and the normalized human face ethnicity features according to the target feature fusion mode;
the attribute feature selection model is constructed by the following steps:
acquiring a plurality of model sample face images of known identities of known face gender characteristics, face age characteristics and face ethnicity characteristics;
constructing a plurality of feature fusion modes by using the face gender feature, the face age feature and the face ethnicity feature of the model sample face image according to a traversal mode, wherein each feature fusion mode comprises more than one face attribute feature of the face gender feature, the face age feature and the face ethnicity feature;
respectively fusing the human face attribute characteristics contained in each characteristic fusion mode and the human face characteristics of the model sample human face image, then identifying the human face, and counting the identification accuracy rate corresponding to each characteristic fusion mode;
associating the face gender characteristics, the face age characteristics and the face ethnicity characteristics of the model sample image with the characteristic fusion mode with the highest recognition accuracy;
and training to obtain the attribute feature selection model by taking the model sample image in a face attribute feature associated feature fusion mode as a training set.
5. The face recognition method of claim 4, wherein after extracting the face features of the target face image, before fusing the face features and the face attribute features of the target face image, the method further comprises:
and adjusting the face characteristics of the target face image according to the target attribute characteristics.
6. The face recognition method of claim 5, wherein the adjusting the face features of the target face image according to the target attribute features comprises:
determining a face feature region associated with the target attribute feature;
and improving the proportion of the human face features in the human face feature region in the target human face image.
7. The face recognition method according to any one of claims 1 to 6, wherein the intercepting of each face image included in the image to be recognized comprises:
carrying out face detection on the image to be recognized by adopting a face detection algorithm;
determining the position coordinates of each face image contained in the image to be recognized according to the face detection result;
and intercepting the image of the area where the position coordinates are located from the image to be identified to obtain each face image.
8. A face recognition apparatus, comprising:
the image acquisition module is used for acquiring an image to be recognized, wherein the image to be recognized comprises more than one face image;
the face intercepting module is used for intercepting each face image contained in the image to be recognized;
the image quality detection module is used for respectively inputting each intercepted human face image into a pre-constructed image quality detection model to obtain the image quality score of each human face image, and the image quality detection model is a neural network model obtained by training a plurality of sample human face images with pre-labeled image quality scores as a training set;
the face filtering module is used for filtering the face images of which the image quality scores are smaller than a preset threshold value in all the face images, and the rest face images are used as target face images to be recognized;
the human face attribute detection module is used for inputting the target human face image into a human face attribute detection model which is constructed in advance to obtain the human face attribute characteristics of the target human face image, and the human face attribute detection model is a neural network model which is obtained by taking a plurality of sample human face images with known human face attribute characteristics as a training set for training;
the face feature extraction module is used for extracting the face features of the target face image;
the characteristic fusion module is used for fusing the face characteristic and the face attribute characteristic of the target face image to obtain the multi-dimensional characteristic of the target face image;
and the face matching module is used for respectively matching the multi-dimensional features of the target face image with the multi-dimensional features of each face image with known identity in a preset image database to obtain a face recognition result of the target face image.
9. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out a face recognition method according to any one of claims 1 to 7.
10. A server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the face recognition method according to any one of claims 1 to 7 when executing the computer program.
CN201911042888.4A 2019-10-30 2019-10-30 Face recognition method, device, storage medium and server Active CN110866466B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911042888.4A CN110866466B (en) 2019-10-30 2019-10-30 Face recognition method, device, storage medium and server
PCT/CN2019/118567 WO2021082087A1 (en) 2019-10-30 2019-11-14 Facial recognition method and device, storage medium and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911042888.4A CN110866466B (en) 2019-10-30 2019-10-30 Face recognition method, device, storage medium and server

Publications (2)

Publication Number Publication Date
CN110866466A true CN110866466A (en) 2020-03-06
CN110866466B CN110866466B (en) 2023-12-26

Family

ID=69654200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911042888.4A Active CN110866466B (en) 2019-10-30 2019-10-30 Face recognition method, device, storage medium and server

Country Status (2)

Country Link
CN (1) CN110866466B (en)
WO (1) WO2021082087A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344916A (en) * 2021-07-21 2021-09-03 上海媒智科技有限公司 Method, system, terminal, medium and application for acquiring machine learning model capability
CN114863540B (en) * 2022-07-05 2022-12-16 杭州魔点科技有限公司 Face attribute analysis-based face recognition online auxiliary method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408335A (en) * 2016-09-05 2017-02-15 江苏宾奥机器人科技有限公司 Internet of Things-based mobile robot information push system
CN106650653B (en) * 2016-12-14 2020-09-15 广东顺德中山大学卡内基梅隆大学国际联合研究院 Construction method of human face recognition and age synthesis combined model based on deep learning
CN107742107B (en) * 2017-10-20 2019-03-01 北京达佳互联信息技术有限公司 Facial image classification method, device and server

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017107957A1 (en) * 2015-12-22 2017-06-29 中兴通讯股份有限公司 Human face image retrieval method and apparatus
CN106815566A (en) * 2016-12-29 2017-06-09 天津中科智能识别产业技术研究院有限公司 A kind of face retrieval method based on multitask convolutional neural networks
CN107563336A (en) * 2017-09-07 2018-01-09 廖海斌 Human face similarity degree analysis method, the device and system of game are matched for famous person
WO2019051814A1 (en) * 2017-09-15 2019-03-21 达闼科技(北京)有限公司 Target recognition method and apparatus, and intelligent terminal
CN107844781A (en) * 2017-11-28 2018-03-27 腾讯科技(深圳)有限公司 Face character recognition methods and device, electronic equipment and storage medium
CN109508654A (en) * 2018-10-26 2019-03-22 中国地质大学(武汉) Merge the human face analysis method and system of multitask and multiple dimensioned convolutional neural networks
CN109522824A (en) * 2018-10-30 2019-03-26 平安科技(深圳)有限公司 Face character recognition methods, device, computer installation and storage medium
CN110263621A (en) * 2019-05-06 2019-09-20 北京迈格威科技有限公司 Image-recognizing method, device and readable storage medium storing program for executing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FARIBORZ TAHERKHANI et al.: "A Deep Face Identification Network Enhanced by Facial Attributes Prediction", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 666-671 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111666976A (en) * 2020-05-08 2020-09-15 深圳力维智联技术有限公司 Feature fusion method and device based on attribute information and storage medium
CN111666976B (en) * 2020-05-08 2023-07-28 深圳力维智联技术有限公司 Feature fusion method, device and storage medium based on attribute information
CN111723762A (en) * 2020-06-28 2020-09-29 湖南国科微电子股份有限公司 Face attribute recognition method and device, electronic equipment and storage medium
WO2021139171A1 (en) * 2020-07-28 2021-07-15 平安科技(深圳)有限公司 Facial enhancement based recognition method, apparatus and device, and storage medium
CN112257581A (en) * 2020-10-21 2021-01-22 广州云从凯风科技有限公司 Face detection method, device, medium and equipment
CN112396016A (en) * 2020-11-26 2021-02-23 武汉宏数信息技术有限责任公司 Face recognition system based on big data technology
CN112396016B (en) * 2020-11-26 2021-07-23 武汉宏数信息技术有限责任公司 Face recognition system based on big data technology
CN113420616A (en) * 2021-06-03 2021-09-21 青岛海信智慧生活科技股份有限公司 Face recognition method, device, equipment and medium
CN113420616B (en) * 2021-06-03 2023-04-07 青岛海信智慧生活科技股份有限公司 Face recognition method, device, equipment and medium
CN113591718A (en) * 2021-07-30 2021-11-02 北京百度网讯科技有限公司 Target object identification method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2021082087A1 (en) 2021-05-06
CN110866466B (en) 2023-12-26

Similar Documents

Publication Publication Date Title
CN110866466B (en) Face recognition method, device, storage medium and server
CN107423690B (en) Face recognition method and device
CN109145742B (en) Pedestrian identification method and system
JP5801601B2 (en) Image recognition apparatus, image recognition apparatus control method, and program
WO2019071664A1 (en) Human face recognition method and apparatus combined with depth information, and storage medium
US11804071B2 (en) Method for selecting images in video of faces in the wild
WO2020244071A1 (en) Neural network-based gesture recognition method and apparatus, storage medium, and device
CN111861240A (en) Suspicious user identification method, device, equipment and readable storage medium
CN111626371A (en) Image classification method, device and equipment and readable storage medium
CN108108711B (en) Face control method, electronic device and storage medium
CN111079816A (en) Image auditing method and device and server
CN109815823B (en) Data processing method and related product
CN112101200A (en) Human face anti-recognition method, system, computer equipment and readable storage medium
El-Abed et al. Quality assessment of image-based biometric information
US9286707B1 (en) Removing transient objects to synthesize an unobstructed image
JP2013218605A (en) Image recognition device, image recognition method, and program
Baltanas et al. A face recognition system for assistive robots
CN108875472B (en) Image acquisition device and face identity verification method based on image acquisition device
CN108694347B (en) Image processing method and device
CN113837174A (en) Target object identification method and device and computer equipment
CN113158788B (en) Facial expression recognition method and device, terminal equipment and storage medium
US20240127631A1 (en) Liveness detection method and apparatus, and computer device
CN113642428B (en) Face living body detection method and device, electronic equipment and storage medium
WO2023109551A1 (en) Living body detection method and apparatus, and computer device
CN113449543B (en) Video detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40018211

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant