CN110866466B - Face recognition method, device, storage medium and server - Google Patents
Face recognition method, device, storage medium and server
- Publication number: CN110866466B
- Application number: CN201911042888.4A
- Authority: CN (China)
- Prior art keywords: face, image, feature, attribute, target
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V40/168: Feature extraction; Face representation
- G06F18/253: Fusion techniques of extracted features
- G06V40/16: Human faces, e.g. facial parts, sketches or expressions
- G06V40/172: Classification, e.g. identification
- G06V40/178: Estimating age from face image; using age information for improving recognition
Abstract
The application belongs to the field of computer technology and provides a face recognition method, device, storage medium and server. On one hand, the method filters out any face image in the image to be recognized whose image quality score falls below a preset threshold, that is, the images most prone to misrecognition due to poor quality, which effectively reduces the false recognition rate of face recognition. On the other hand, when matching face images, it uses multi-dimensional features that fuse face features with face attribute features; compared with the traditional approach of matching on face features alone, this substantially improves the accuracy of face recognition.
Description
Technical Field
The present application belongs to the field of computer technology, and in particular relates to a face recognition method, device, storage medium and server.
Background
A person's identity can be determined by extracting and recognizing facial features, so face recognition is widely applied in industries such as security and finance, where it provides enterprises with stronger security guarantees. However, conventional face recognition systems have poor robustness: image quality factors in the face image, such as illumination, pose angle, blur and facial aging, also affect the recognition result and lead to a relatively high face false recognition rate.
Disclosure of Invention
In view of the above, the present application provides a face recognition method that can effectively improve the accuracy of face recognition and reduce the false recognition rate.
In a first aspect, an embodiment of the present application provides a face recognition method, including:
acquiring an image to be recognized, wherein the image to be recognized contains one or more face images;
cropping out each face image contained in the image to be recognized;
inputting each cropped face image into a pre-constructed image quality detection model to obtain an image quality score for each face image, wherein the image quality detection model is a neural network model trained on a training set of sample face images with pre-labeled image quality scores;
filtering out the face images whose image quality score is below a preset threshold, and taking the remaining face images as the target face images to be recognized;
inputting the target face image into a pre-constructed face attribute detection model to obtain the face attribute features of the target face image, wherein the face attribute detection model is a neural network model trained on a training set of sample face images with known face attribute features;
extracting the face features of the target face image;
fusing the face features and the face attribute features of the target face image to obtain the multi-dimensional features of the target face image;
and matching the multi-dimensional features of the target face image against the multi-dimensional features of face images with known identities in a preset image database, respectively, to obtain the face recognition result of the target face image.
According to the face recognition method provided by the embodiments of the present application, on one hand, the face images in the image to be recognized whose image quality score falls below the preset threshold are filtered out, that is, the face images most prone to misrecognition due to poor image quality are removed, which effectively reduces the false recognition rate of face recognition. On the other hand, when matching the features of the face images, multi-dimensional features fusing the face features with the face attribute features are used; compared with the traditional approach of matching on face features alone, this greatly improves the accuracy of face recognition.
Further, the face attribute detection model includes a face gender detection model, a face age detection model and a face race detection model, and inputting the target face image into the pre-constructed face attribute detection model to obtain the face attribute features of the target face image may include:
inputting the target face image into the face gender detection model, the face age detection model and the face race detection model to obtain, respectively, the face gender feature, face age feature and face race feature of the target face image;
and fusing the face gender feature, the face age feature and the face race feature to obtain the face attribute features of the target face image.
That is, when obtaining the face attribute features, three models can be constructed: a face gender detection model, a face age detection model and a face race detection model. They detect the face gender feature, face age feature and face race feature of the target face image respectively, and the three detected attribute features are then fused into the final face attribute features.
Further, fusing the face gender feature, the face age feature and the face race feature may include:
performing an L1 normalization operation on the face gender feature, the face age feature and the face race feature;
and fusing the normalized face gender feature, face age feature and face race feature.
Because the face gender, face age and face race features differ considerably in scale, fusing them without normalization would degrade the effect of the subsequent face recognition. Normalizing the three types of features before fusing them therefore helps improve the accuracy of the subsequent face recognition.
Further, fusing the normalized face gender feature, face age feature and face race feature may include:
inputting the normalized face gender feature, face age feature and face race feature into a pre-constructed attribute feature selection model, and determining a target feature fusion mode from the output of the attribute feature selection model;
and selecting target attribute features from the normalized face gender feature, face age feature and face race feature according to the target feature fusion mode and fusing them;
wherein the attribute feature selection model is constructed through the following steps:
acquiring a plurality of model sample face images with known face gender features, face age features and face race features;
constructing a plurality of feature fusion modes from the face gender feature, face age feature and face race feature of the model sample face images, wherein each feature fusion mode comprises one or more of the face gender feature, the face age feature and the face race feature;
for each feature fusion mode, fusing the face attribute features it contains with the face features of the model sample face image, performing face recognition, and counting the recognition accuracy corresponding to that feature fusion mode;
associating the face gender feature, face age feature and face race feature of the model sample image with the feature fusion mode having the highest recognition accuracy;
and training the attribute feature selection model on a training set of the model sample images together with their associated feature fusion modes.
To further improve the accuracy of face recognition, an attribute feature selection model can be constructed in advance to select the attribute features that benefit face recognition, that is, to automatically choose which attribute features to fuse so as to obtain the face attribute features most favorable for recognition. The model matches the input normalized face gender, face age and face race features against those of each sample image, finds the sample image with the highest similarity, and outputs the optimal fusion mode associated with that sample image, thereby determining the target attribute features to select and fuse.
Further, after extracting the face features of the target face image and before fusing the face features of the target face image with the face attribute features, the method may further include:
adjusting the face features of the target face image according to the target attribute features.
A specific adjustment may include:
determining the face feature region associated with the target attribute feature;
and increasing the weight of the face features of that face feature region within the target face image.
To improve the accuracy of face recognition, the face features of the target face image can be adjusted according to the selected target attribute features. For example, if the target attribute feature indicates a male face, the weight of the male-specific parts of the face features is increased, for instance the weight of the beard region features.
Specifically, cropping out each face image contained in the image to be recognized may include:
performing face detection on the image to be recognized using a face detection algorithm;
determining the position coordinates of each face image contained in the image to be recognized according to the face detection result;
and cropping the image region at the position coordinates from the image to be recognized to obtain each face image.
That is, a face detection algorithm detects the faces in the image to be recognized and obtains their coordinates in the image, and a face crop operation is then performed to obtain the cropped face images. The purpose of this operation is to remove the background information of the face image and reduce noise interference.
In a second aspect, an embodiment of the present application provides a face recognition device, including:
an image acquisition module, configured to acquire an image to be recognized, wherein the image to be recognized contains one or more face images;
a face cropping module, configured to crop out each face image contained in the image to be recognized;
an image quality detection module, configured to input each cropped face image into a pre-constructed image quality detection model to obtain an image quality score for each face image, wherein the image quality detection model is a neural network model trained on a training set of sample face images with pre-labeled image quality scores;
a face filtering module, configured to filter out the face images whose image quality score is below a preset threshold, taking the remaining face images as the target face images to be recognized;
a face attribute detection module, configured to input the target face image into a pre-constructed face attribute detection model to obtain the face attribute features of the target face image, wherein the face attribute detection model is a neural network model trained on a training set of sample face images with known face attribute features;
a face feature extraction module, configured to extract the face features of the target face image;
a feature fusion module, configured to fuse the face features and the face attribute features of the target face image to obtain the multi-dimensional features of the target face image;
and a face matching module, configured to match the multi-dimensional features of the target face image against the multi-dimensional features of face images with known identities in a preset image database, respectively, to obtain the face recognition result of the target face image.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the face recognition method according to the first aspect of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a server comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the face recognition method according to the first aspect of the embodiments of the present application when executing the computer program.
In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to perform the face recognition method according to the first aspect.
It will be appreciated that the advantages of the second to fifth aspects can be found in the related description of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments or the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained from them by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a flowchart of a first embodiment of a face recognition method provided in an embodiment of the present application;
fig. 2 is a flowchart of a second embodiment of a face recognition method provided in an embodiment of the present application;
fig. 3 is a flowchart of a third embodiment of a face recognition method provided in an embodiment of the present application;
fig. 4 is a block diagram of one embodiment of a face recognition device provided in an embodiment of the present application;
Fig. 5 is a schematic diagram of a server according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail. In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
The face recognition method provided by the embodiments of the present application can effectively improve the accuracy of face recognition and reduce the false recognition rate.
It should be understood that the execution subject of the face recognition method proposed in the embodiments of the present application is a server.
Referring to fig. 1, a first embodiment of a face recognition method in an embodiment of the present application includes:
101. Acquiring an image to be recognized, wherein the image to be recognized contains one or more face images;
First, an image to be recognized, containing one or more face images, is acquired. A camera can capture an image of a designated area, for example the people waiting to transact business in a bank hall, and this image serves as the image to be recognized.
102. Cropping out each face image contained in the image to be recognized;
After the image to be recognized is acquired, each face image it contains is cropped out. Specifically, various face detection algorithms can be used to detect faces in the image to be recognized and crop out the detected face images, which removes the background information of non-face regions in the image and reduces noise interference in subsequent face recognition.
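As an illustration of the detect-and-crop step, the sketch below uses OpenCV's Haar cascade as a stand-in detector; the patent does not name a specific face detection algorithm, so the detector choice and function name here are assumptions.

```python
# Illustrative sketch only: the Haar cascade stands in for whichever face
# detection algorithm an implementation actually uses.
import cv2

def crop_faces(image_path: str):
    """Detect each face in the image to be recognized and crop it out."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Position coordinates (x, y, w, h) of every detected face
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Crop the region at each coordinate, discarding non-face background
    return [image[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```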
103. Inputting each cropped face image into a pre-constructed image quality detection model to obtain an image quality score for each face image;
Next, each cropped face image is input into a pre-constructed image quality detection model to obtain its image quality score. The image quality detection model is a neural network model trained on a training set of sample face images with pre-labeled image quality scores (for example, 0 to 100). The model scores the quality of a face image by examining image quality parameters such as overexposure, underexposure, occlusion, large deflection angle and blur; the higher the score, the better the image quality. Specifically, the model matches the image quality parameters of the input face image against those of each sample face image to obtain a similarity, finds the most similar sample face image, and determines the input image's quality score from the score of that sample face image.
104. Filtering out the face images whose image quality score is below a preset threshold, and taking the remaining face images as the target face images to be recognized;
The face images whose image quality score is below a preset threshold (for example, 60) are then filtered out, since their poor quality makes them prone to misrecognition. The remaining face images, with better image quality, serve as the target face images for subsequent processing. Specifically, the threshold can be set according to the accuracy the current application demands of the face images: for example, if face recognition must be highly accurate and produce no misrecognition, a higher threshold can be set.
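A minimal sketch of steps 103 and 104 follows, assuming a hypothetical `quality_model` object (a trained network exposing a `predict` method that returns a 0-100 score); only the filtering logic and the example threshold of 60 come from the text.

```python
# quality_model is a hypothetical stand-in for the trained image quality
# detection model; its predict() interface is an assumption.
QUALITY_THRESHOLD = 60  # example value from the description

def select_target_faces(face_images, quality_model):
    """Keep only the faces whose image quality score reaches the threshold."""
    target_faces = []
    for face in face_images:
        score = quality_model.predict(face)  # image quality score, 0-100
        if score >= QUALITY_THRESHOLD:       # lower-scoring faces are filtered out
            target_faces.append(face)
    return target_faces
```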
105. Inputting the target face image into a pre-constructed face attribute detection model to obtain the face attribute features of the target face image;
After the target face images with better image quality are selected, each is input into a pre-constructed face attribute detection model to obtain its face attribute features. Face attribute features characterize attributes related to the face, such as age, gender, race and expression. Specifically, the face attribute detection model is a neural network model trained on a training set of sample face images with known face attribute features, for example a deep neural network that takes a face image and outputs its gender, or one that takes a face image and outputs its age. The face attribute detection model may also comprise several sub-models, each detecting a different face attribute, so that several different face attribute features of the target face image are obtained.
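The sub-model variant might be sketched as follows; the three model objects and their `predict` interface are assumptions for illustration, not part of the patent.

```python
import numpy as np

def detect_attributes(face, gender_model, age_model, race_model) -> np.ndarray:
    """Run three hypothetical attribute sub-models on one target face."""
    gender = gender_model.predict(face)  # e.g. 0 = female, 1 = male (1 dim)
    age = age_model.predict(face)        # e.g. estimated age in years (1 dim)
    race = race_model.predict(face)      # e.g. a race class index (1 dim)
    # Together, the three 1-dim outputs form the raw face attribute features
    return np.array([gender, age, race], dtype=np.float32)
```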
106. Extracting the face features of the target face image;
Next, the face features of the target face image are extracted; these may be features of the facial features (eyes, brows, nose, mouth and ears), hair or accessories.
107. Fusing the face features and the face attribute features of the target face image to obtain the multi-dimensional features of the target face image;
Then the face features and the face attribute features of the target face image are fused to obtain its multi-dimensional features, that is, features spanning both the face feature and face attribute feature dimensions. In terms of the data representation, if the extracted face features are 128-dimensional and the face attribute features are 3-dimensional (1 dimension each for gender, age and race), fusing them yields 131-dimensional data, the multi-dimensional features.
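Under the dimensions given above, this fusion is a plain concatenation; the sketch assumes a 128-dim embedding from some face feature extractor, which the description does not specify.

```python
import numpy as np

def fuse_features(face_feature: np.ndarray, attr_feature: np.ndarray) -> np.ndarray:
    """Concatenate 128-dim face features with 3-dim attribute features."""
    assert face_feature.shape == (128,) and attr_feature.shape == (3,)
    return np.concatenate([face_feature, attr_feature])  # 131-dim result
```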
108. Matching the multi-dimensional features of the target face image against the multi-dimensional features of face images with known identities in a preset image database, respectively, to obtain the face recognition result of the target face image.
Finally, the multi-dimensional features of the target face image are matched against the multi-dimensional features of each known-identity face image in a preset image database, yielding the face recognition result of the target face image, that is, the identity of each target face image. Compared with the traditional approach of comparing face features alone, comparing multi-dimensional features further improves the accuracy of face recognition.
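One plausible reading of the matching step is nearest-neighbour search, sketched here with cosine similarity; the patent does not fix a comparison metric, so both the metric and the acceptance threshold are assumptions.

```python
import numpy as np

def match_identity(query: np.ndarray, database: dict, min_sim: float = 0.6):
    """database maps identity -> stored 131-dim multi-dimensional feature."""
    best_id, best_sim = None, min_sim
    for identity, feature in database.items():
        sim = np.dot(query, feature) / (
            np.linalg.norm(query) * np.linalg.norm(feature))
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id  # None if no stored face is similar enough
```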
An actual application scenario of this embodiment is the identification of people in a bank hall: a camera captures an image of the bank hall, containing several face images, as the image to be recognized. The face attribute features of these face images are obtained and fused with their face features into multi-dimensional features, which are then compared with the multi-dimensional features of customer faces stored in the bank's database system to identify the identity corresponding to each face image. Because a bank has high requirements on identification accuracy, it is preferable for the system to fail to recognize a face image than to misrecognize it, so the face images with poor image quality are filtered out during processing.
According to the face recognition method provided by the embodiments of the present application, on one hand, the face images in the image to be recognized whose image quality score falls below the preset threshold are filtered out, that is, the face images most prone to misrecognition due to poor image quality are removed, which effectively reduces the false recognition rate of face recognition. On the other hand, when matching the features of the face images, multi-dimensional features fusing the face features with the face attribute features are used; compared with the traditional approach of matching on face features alone, this greatly improves the accuracy of face recognition.
Referring to fig. 2, a second embodiment of a face recognition method in the embodiments of the present application includes:
201. Acquiring an image to be recognized, wherein the image to be recognized contains one or more face images;
202. Cropping out each face image contained in the image to be recognized;
203. Inputting each cropped face image into a pre-constructed image quality detection model to obtain an image quality score for each face image;
204. Filtering out the face images whose image quality score is below a preset threshold, and taking the remaining face images as the target face images to be recognized;
Steps 201-204 are identical to steps 101-104; refer to the related description of steps 101-104.
205. Inputting the target face image into a face gender detection model, a face age detection model and a face race detection model to obtain, respectively, the face gender feature, face age feature and face race feature of the target face image;
In this embodiment, three deep neural network models are pre-constructed: a face gender detection model, a face age detection model and a face race detection model, used respectively to detect the face gender feature, face age feature and face race feature of the target face image. For the construction and working principles of these three models, refer to the prior art.
206. Fusing the face gender feature, the face age feature and the face race feature to obtain the face attribute features of the target face image;
The face gender feature, face age feature and face race feature are then fused into the face attribute features of the target face image. Data fusion here means combining several separate data into one multi-dimensional datum; for example, fusing face gender (1 dimension), face age (1 dimension) and face race (1 dimension) yields the 3-dimensional face attribute features.
Specifically, step 206 may include:
(1) performing an L1 normalization operation on the face gender feature, the face age feature and the face race feature;
(2) fusing the normalized face gender feature, face age feature and face race feature.
Because the face gender, face age and face race features differ considerably in scale, fusing them without normalization would degrade the effect of the subsequent face recognition. Normalizing the three types of features before fusing them therefore helps improve the accuracy of the subsequent face recognition.
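One straightforward reading of the L1 normalization operation is to rescale the attribute vector by its L1 norm, sketched below; the text does not specify whether the norm is taken per feature or over the whole vector, so this is an assumption.

```python
import numpy as np

def l1_normalize(attr_feature: np.ndarray) -> np.ndarray:
    """Rescale so the absolute values sum to 1, evening out the scales."""
    norm = np.abs(attr_feature).sum()
    return attr_feature / norm if norm > 0 else attr_feature
```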
Further, step (2) may include:
(2.1) inputting the normalized face gender feature, face age feature and face race feature into a pre-constructed attribute feature selection model, and determining a target feature fusion mode from the output of the attribute feature selection model;
(2.2) selecting target attribute features from the normalized face gender feature, face age feature and face race feature according to the target feature fusion mode and fusing them.
The attribute feature selection model is constructed through the following steps:
(a) acquiring a plurality of model sample face images with known face gender features, face age features and face race features;
(b) constructing a plurality of feature fusion modes from the face gender feature, face age feature and face race feature of the model sample face images, wherein each feature fusion mode comprises one or more of the face gender feature, the face age feature and the face race feature;
(c) for each feature fusion mode, fusing the face attribute features it contains with the face features of the model sample face image, performing face recognition, and counting the recognition accuracy corresponding to that feature fusion mode;
(d) associating the face gender feature, face age feature and face race feature of the model sample image with the feature fusion mode having the highest recognition accuracy;
(e) training the attribute feature selection model on a training set of the model sample images together with their associated feature fusion modes.
To further improve the accuracy of face recognition, an attribute feature selection model can be constructed in advance to select the attribute features that benefit face recognition, that is, to automatically choose which attribute features to fuse so as to obtain the face attribute features most favorable for recognition. Specifically, a large number of sample face images with known face gender, face age, face race and identity can be collected in advance. During recognition of these sample images, subsets of the attribute features are selected and fused according to a permutation-and-combination scheme (such as face gender + face age, face gender + face race, and so on), the face recognition accuracy of the sample images under each feature fusion mode is counted separately, the feature fusion mode with the highest accuracy is found, and that fusion mode is associated with the sample image's pre-fusion face gender, face age and face race values. The attribute feature selection model is then trained on a training set consisting of the face gender, face age and face race of the sample face images together with their corresponding feature fusion modes. At inference time, the model matches the input normalized face gender, face age and face race features against those of each sample image, finds the sample image with the highest similarity, and outputs the optimal feature fusion mode associated with that sample image, thereby determining the target attribute features to select and fuse.
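The offline search over fusion modes in steps (a) to (c) can be sketched as an enumeration of attribute subsets; the `evaluate_accuracy` helper, which runs recognition on the fused features and returns the accuracy, is hypothetical.

```python
from itertools import combinations
import numpy as np

ATTRS = ["gender", "age", "race"]

def best_fusion_mode(samples, evaluate_accuracy):
    """samples: list of (face_feature, {attr_name: value}, identity)."""
    best_mode, best_acc = None, -1.0
    for r in range(1, len(ATTRS) + 1):
        for mode in combinations(ATTRS, r):   # e.g. ("gender", "age")
            fused = [
                (np.concatenate([feat, [attrs[k] for k in mode]]), ident)
                for feat, attrs, ident in samples
            ]
            acc = evaluate_accuracy(fused)    # recognition accuracy for mode
            if acc > best_acc:
                best_mode, best_acc = mode, acc
    return best_mode  # the fusion mode to associate with these samples
```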
207. Extracting the face features of the target face image;
208. Fusing the face features and the face attribute features of the target face image to obtain the multi-dimensional features of the target face image;
Further, after extracting the face features of the target face image and before fusing the face features of the target face image with the face attribute features, the method may further include:
adjusting the face features of the target face image according to the target attribute features.
When the face attribute features are fused in step 206, if the fused target attribute features were determined by the attribute feature selection model, the face features of the target face image can additionally be adjusted according to the target attribute features to further improve recognition accuracy. A specific adjustment is to determine the face feature region associated with the target attribute feature and increase the weight of the face features of that region within the target face image. For example, if the target attribute feature indicates a male face, the weight of the male-specific parts of the face features, such as the beard region features, is increased. Raising the weight of these specific features lets the subsequent face feature matching focus on them, relatively weakening the interference of other non-key face features and further improving face recognition accuracy.
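The reweighting can be sketched as boosting the feature dimensions tied to the attribute-related region; the region-to-dimension mapping and the boost factor below are purely illustrative assumptions, since the description only states that the region's weight is increased.

```python
import numpy as np

# Hypothetical mapping from an attribute-related face region to the slice of
# feature dimensions that encode it
REGION_DIMS = {"beard": slice(40, 56)}

def reweight(face_feature: np.ndarray, region: str, boost: float = 1.5) -> np.ndarray:
    """Raise the weight of the face features in the associated region."""
    adjusted = face_feature.copy()
    adjusted[REGION_DIMS[region]] *= boost
    return adjusted
```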
209. Matching the multi-dimensional features of the target face image against the multi-dimensional features of face images with known identities in a preset image database, respectively, to obtain the face recognition result of the target face image.
Steps 207-209 are identical to steps 106-108; refer to the related description of steps 106-108.
In this embodiment, the target face image is input into the face gender detection model, the face age detection model and the face race detection model to obtain its face gender, face age and face race features respectively, which are then fused into the face attribute features of the target face image. Furthermore, when fusing the face attribute features, the attribute feature selection model can select the optimal target attribute features from the normalized face gender, face age and face race features for fusion, which effectively improves the accuracy of the subsequent face recognition.
Referring to fig. 3, a third embodiment of a face recognition method in the embodiments of the present application includes:
301. Acquiring an image to be recognized, wherein the image to be recognized contains one or more face images;
Step 301 is the same as step 101; refer to the related description of step 101.
302. Performing face detection on the image to be recognized using a face detection algorithm;
After the image to be recognized is acquired, face detection is performed on it using a face detection algorithm. Specifically, existing face detection algorithms of various types can be used to determine whether the image to be recognized contains faces and where each face is located in the image.
303. Determining the position coordinates of each face image contained in the image to be recognized according to the face detection result;
Once the face detection result is obtained, each face image in the image to be recognized can be located, that is, the position coordinates of each face image are obtained.
304. Cropping the image region at the position coordinates from the image to be recognized to obtain each face image;
The image region at the position coordinates is then cropped from the image to be recognized, yielding each face image. In other words, a face crop operation is performed to obtain the cropped face images; its purpose is to remove the background information of the face image and reduce noise interference.
305. Inputting each cropped face image into a pre-constructed image quality detection model to obtain an image quality score for each face image;
306. Filtering out the face images whose image quality score is below a preset threshold, and taking the remaining face images as the target face images to be recognized;
307. Inputting the target face image into a pre-constructed face attribute detection model to obtain the face attribute features of the target face image;
308. Extracting the face features of the target face image;
309. Fusing the face features and the face attribute features of the target face image to obtain the multi-dimensional features of the target face image;
310. Matching the multi-dimensional features of the target face image against the multi-dimensional features of face images with known identities in a preset image database, respectively, to obtain the face recognition result of the target face image.
Steps 305-310 are identical to steps 103-108; refer to the related description of steps 103-108.
Compared with the first embodiment of the present application, this embodiment sets out a specific way of cropping out each face image contained in the image to be recognized. Faces are detected in the image to be recognized and the detected face images are cropped out, which removes the background information of non-face regions in the image and reduces noise interference in subsequent face recognition.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the face recognition method described in the above embodiments, fig. 4 shows a block diagram of the face recognition device provided in the embodiments of the present application; for convenience of explanation, only the parts related to the embodiments of the present application are shown.
Referring to fig. 4, the apparatus includes:
an image acquisition module 401, configured to acquire an image to be recognized, wherein the image to be recognized contains one or more face images;
a face cropping module 402, configured to crop out each face image contained in the image to be recognized;
an image quality detection module 403, configured to input each cropped face image into a pre-constructed image quality detection model to obtain an image quality score for each face image, wherein the image quality detection model is a neural network model trained on a training set of sample face images with pre-labeled image quality scores;
a face filtering module 404, configured to filter out the face images whose image quality score is below a preset threshold, taking the remaining face images as the target face images to be recognized;
a face attribute detection module 405, configured to input the target face image into a pre-constructed face attribute detection model to obtain the face attribute features of the target face image, wherein the face attribute detection model is a neural network model trained on a training set of sample face images with known face attribute features;
a face feature extraction module 406, configured to extract the face features of the target face image;
a feature fusion module 407, configured to fuse the face features and the face attribute features of the target face image to obtain the multi-dimensional features of the target face image;
and a face matching module 408, configured to match the multi-dimensional features of the target face image against the multi-dimensional features of face images with known identities in a preset image database, respectively, to obtain the face recognition result of the target face image.
Further, the face attribute detection model includes a face gender detection model, a face age detection model and a face race detection model, and the face attribute detection module may include:
a face image input unit, configured to input the target face image into the face gender detection model, the face age detection model and the face race detection model to obtain, respectively, the face gender feature, face age feature and face race feature of the target face image;
and an attribute feature fusion unit, configured to fuse the face gender feature, the face age feature and the face race feature to obtain the face attribute features of the target face image.
Further, the attribute feature fusion unit may include:
a normalization subunit, configured to perform an L1 normalization operation on the face gender feature, the face age feature and the face race feature;
and an attribute feature fusion subunit, configured to fuse the normalized face gender feature, face age feature and face race feature.
Still further, the attribute feature fusion subunit may include:
a feature fusion mode determining subunit, configured to input the normalized face gender feature, face age feature and face race feature into a pre-constructed attribute feature selection model and determine a target feature fusion mode from the output of the attribute feature selection model;
and a target attribute feature selecting subunit, configured to select target attribute features from the normalized face gender feature, face age feature and face race feature according to the target feature fusion mode and fuse them.
The attribute feature selection model is constructed through the following steps:
acquiring a plurality of model sample face images with known face gender features, face age features and face race features;
constructing a plurality of feature fusion modes from the face gender feature, face age feature and face race feature of the model sample face images, wherein each feature fusion mode comprises one or more of the face gender feature, the face age feature and the face race feature;
for each feature fusion mode, fusing the face attribute features it contains with the face features of the model sample face image, performing face recognition, and counting the recognition accuracy corresponding to that feature fusion mode;
associating the face gender feature, face age feature and face race feature of the model sample image with the feature fusion mode having the highest recognition accuracy;
and training the attribute feature selection model on a training set of the model sample images together with their associated feature fusion modes.
Further, the face recognition device may further include:
a face feature adjustment module, configured to adjust the face features of the target face image according to the target attribute features.
Still further, the face feature adjustment module may include:
a feature region determining unit, configured to determine the face feature region associated with the target attribute feature;
and a feature weight adjusting unit, configured to increase the weight of the face features of that face feature region within the target face image.
Further, the face cropping module may include:
a face detection unit, configured to perform face detection on the image to be recognized using a face detection algorithm;
a position coordinate determining unit, configured to determine the position coordinates of each face image contained in the image to be recognized according to the face detection result;
and a face cropping unit, configured to crop the image region at the position coordinates from the image to be recognized to obtain each face image.
An embodiment of the present application also provides a computer-readable storage medium storing computer readable instructions which, when executed by a processor, implement the steps of any one of the face recognition methods shown in fig. 1 to 3.
An embodiment of the present application further provides a server, comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor implements the steps of any one of the face recognition methods shown in fig. 1 to 3 when executing the computer readable instructions.
An embodiment of the present application also provides a computer program product which, when run on a server, causes the server to perform the steps of any one of the face recognition methods shown in fig. 1 to 3.
Fig. 5 is a schematic diagram of a server according to an embodiment of the present application. As shown in fig. 5, the server 5 of this embodiment includes: a processor 50, a memory 51, and computer readable instructions 52 stored in the memory 51 and executable on the processor 50. The processor 50, when executing the computer readable instructions 52, implements the steps of the various face recognition method embodiments described above, such as steps 101 through 108 shown in fig. 1. Alternatively, the processor 50, when executing the computer readable instructions 52, performs the functions of the modules/units of the apparatus embodiments described above, such as the functions of modules 401 through 408 shown in fig. 4.
For example, the computer readable instructions 52 may be partitioned into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to complete the present application. The one or more modules/units may be a series of computer readable instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution of the computer readable instructions 52 in the server 5.
The server 5 may be a computing device such as a smart phone, a notebook, a palm computer, a cloud server, etc. The server 5 may include, but is not limited to, a processor 50, a memory 51. It will be appreciated by those skilled in the art that fig. 5 is merely an example of the server 5 and is not meant to be limiting of the server 5, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the server 5 may also include input and output devices, network access devices, buses, etc.
The processor 50 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the server 5, for example a hard disk or memory of the server 5. The memory 51 may also be an external storage device of the server 5, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the server 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the server 5. The memory 51 is used to store the computer readable instructions and other programs and data required by the server. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of the functional units and modules is illustrated. In practical applications, the above functions may be distributed among different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the present application implements all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program can implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing device/terminal apparatus, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not detailed or described in a particular embodiment, reference may be made to the related descriptions of other embodiments.
The above embodiments are only for illustrating the technical solutions of the present application and are not limiting. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be included in the scope of the present application.
Claims (7)
1. A face recognition method, comprising:
acquiring an image to be recognized, wherein the image to be recognized contains one or more face images;
cropping out each face image contained in the image to be recognized;
inputting each cropped face image into a pre-constructed image quality detection model to obtain an image quality score for each face image, wherein the image quality detection model is a neural network model trained on a training set of sample face images with pre-labeled image quality scores;
filtering out the face images whose image quality score is below a preset threshold, and taking the remaining face images as the target face images to be recognized;
inputting the target face image into a pre-constructed face attribute detection model to obtain the face attribute features of the target face image, wherein the face attribute detection model is a neural network model trained on a training set of sample face images with known face attribute features;
extracting the face features of the target face image;
fusing the face features and the face attribute features of the target face image to obtain the multi-dimensional features of the target face image;
and matching the multi-dimensional features of the target face image against the multi-dimensional features of face images with known identities in a preset image database, respectively, to obtain the face recognition result of the target face image;
wherein the face attribute detection model comprises a face gender detection model, a face age detection model and a face race detection model, and the inputting the target face image into a pre-constructed face attribute detection model to obtain the face attribute features of the target face image comprises:
inputting the target face image into the face gender detection model, the face age detection model and the face race detection model to respectively obtain a face gender feature, a face age feature and a face race feature of the target face image;
fusing the face gender feature, the face age feature and the face race feature to obtain the face attribute features of the target face image;
the fusing the face gender feature, the face age feature and the face race feature comprises:
performing an L1 normalization operation on the face gender feature, the face age feature, and the face ethnicity feature;
fusing the normalized face gender feature, the face age feature and the face race feature;
the fusing the normalized face gender feature, the face age feature and the face race feature comprises:
inputting the normalized face gender feature, face age feature and face race feature into a pre-constructed attribute feature selection model, and determining a target feature fusion mode according to the output result of the attribute feature selection model;
selecting, according to the target feature fusion mode, target attribute features from the normalized face gender feature, face age feature and face race feature, and fusing the selected features;
the attribute feature selection model is constructed through the following steps:
acquiring a plurality of model sample face images with known face gender features, face age features and face race features;
constructing a plurality of feature fusion modes from the face gender feature, the face age feature and the face race feature of the model sample face images, wherein each feature fusion mode comprises one or more of the face gender feature, the face age feature and the face race feature;
for each feature fusion mode, fusing the face attribute features contained in that mode with the face features of the model sample face images, then carrying out face recognition, and counting the recognition accuracy corresponding to that feature fusion mode;
associating the face gender feature, the face age feature and the face race feature of each model sample face image with the feature fusion mode having the highest recognition accuracy;
and training the attribute feature selection model by taking the model sample face images, each associated with its face attribute features and feature fusion mode, as a training set.
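For orientation only, below is a minimal Python sketch of the normalization, fusion and matching steps recited in claim 1. It is not the claimed implementation: the concatenation-based fusion, the cosine-similarity matching and every function name are illustrative assumptions, since the claim fixes none of these choices.

```python
import numpy as np

def l1_normalize(v: np.ndarray) -> np.ndarray:
    # L1 normalization: scale the vector so its absolute values sum to 1
    s = np.abs(v).sum()
    return v / s if s > 0 else v

def fuse_features(face_feat: np.ndarray, attr_feats: dict, mode: set) -> np.ndarray:
    # Fuse the face feature with the L1-normalized attribute features
    # selected by the target feature fusion mode, a subset of
    # {"gender", "age", "race"}; concatenation is an assumption.
    parts = [face_feat]
    for name in ("gender", "age", "race"):
        if name in mode:
            parts.append(l1_normalize(attr_feats[name]))
    return np.concatenate(parts)

def match_identity(query: np.ndarray, database: dict) -> str:
    # Match the multi-dimensional feature against the features of
    # known identities using cosine similarity (an assumption).
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(database, key=lambda identity: cos(query, database[identity]))
```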
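Similarly, a hedged sketch of how the training set for the attribute feature selection model might be assembled. Reading the accuracy statistics per sample is one plausible interpretation of the claim, and `evaluate_accuracy` is a hypothetical callback standing in for the fuse-then-recognize run.

```python
from itertools import combinations

ATTRS = ("gender", "age", "race")

def fusion_modes():
    # Each feature fusion mode contains one or more of the three face
    # attribute features: the 7 non-empty subsets of ATTRS in total.
    return [set(c) for r in range(1, len(ATTRS) + 1)
            for c in combinations(ATTRS, r)]

def build_training_set(samples, evaluate_accuracy):
    # For every model sample face image, score each fusion mode with the
    # hypothetical evaluate_accuracy(sample, mode) callback, then associate
    # the sample's attribute features with the best-scoring fusion mode.
    training_set = []
    for sample in samples:
        best_mode = max(fusion_modes(),
                        key=lambda m: evaluate_accuracy(sample, m))
        training_set.append((sample["attribute_features"], best_mode))
    return training_set
```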
2. The face recognition method of claim 1, wherein, after extracting the face features of the target face image and before fusing the face features of the target face image with the face attribute features, the method further comprises:
and adjusting the face features of the target face image according to the target attribute features.
3. The face recognition method of claim 2, wherein the adjusting the face features of the target face image according to the target attribute features comprises:
determining a face feature region associated with the target attribute feature;
and increasing the proportion of the face features located in the face feature region within the face features of the target face image.
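A minimal sketch of the adjustment recited in claims 2 and 3, assuming the face feature is a vector whose dimensions map to face regions; the region lookup table `REGION_INDEX`, the `boost` factor and the renormalization are hypothetical choices, not claimed values.

```python
import numpy as np

# Hypothetical mapping from a target attribute to the indices of the
# face feature dimensions associated with its face feature region.
REGION_INDEX = {"gender": np.arange(0, 64), "age": np.arange(64, 128)}

def adjust_face_features(face_feat: np.ndarray, target_attr: str,
                         boost: float = 1.5) -> np.ndarray:
    # Increase the proportion of the features in the associated region,
    # then renormalize so the overall magnitude is unchanged (assumption).
    adjusted = face_feat.astype(float)
    adjusted[REGION_INDEX[target_attr]] *= boost
    return adjusted / (np.linalg.norm(adjusted) + 1e-12)
```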
4. The face recognition method according to any one of claims 1 to 3, wherein the cropping out each face image contained in the image to be identified comprises:
performing face detection on the image to be identified using a face detection algorithm;
determining, according to the face detection result, the position coordinates of each face image contained in the image to be identified;
and cropping, from the image to be identified, the image of the region where the position coordinates are located, so as to obtain each face image.
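Claim 4 leaves the face detection algorithm unspecified; as one concrete stand-in, the sketch below uses OpenCV's bundled Haar cascade to obtain the position coordinates and then crops each detected region from the image to be identified.

```python
import cv2

def crop_faces(image_bgr):
    # Detect faces with OpenCV's Haar cascade (a stand-in detector),
    # then crop the region at each returned position coordinate.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [image_bgr[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```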
5. A face recognition device, comprising:
the image acquisition module is used for acquiring an image to be identified, wherein the image to be identified comprises more than one face image;
the face cropping module is used for cropping out each face image contained in the image to be identified;
the image quality detection module is used for respectively inputting each cropped face image into a pre-constructed image quality detection model to obtain an image quality score of each face image, wherein the image quality detection model is a neural network model trained by taking a plurality of sample face images with pre-labeled image quality scores as a training set;
the face filtering module is used for filtering out the face images whose image quality scores are smaller than a preset threshold value, and taking the remaining face images as target face images to be identified;
the face attribute detection module is used for inputting the target face image into a pre-constructed face attribute detection model to obtain the face attribute features of the target face image, wherein the face attribute detection model is a neural network model trained by taking a plurality of sample face images with known face attribute features as a training set;
the face feature extraction module is used for extracting face features of the target face image;
the feature fusion module is used for fusing the face features and the face attribute features of the target face image to obtain multi-dimensional features of the target face image;
the face matching module is used for respectively matching the multi-dimensional features of the target face image with the multi-dimensional features of face images of known identities in a preset image database, to obtain a face recognition result of the target face image;
the face attribute detection model comprises a face gender detection model, a face age detection model and a face race detection model, and the face attribute detection module comprises:
a face image input unit, configured to input the target face image into the face gender detection model, the face age detection model, and the face race detection model, to obtain a face gender feature, a face age feature, and a face race feature of the target face image, respectively;
the attribute feature fusion unit is used for fusing the face gender feature, the face age feature and the face race feature to obtain the face attribute features of the target face image;
the attribute feature fusion unit comprises:
a normalization subunit, configured to perform an L1 normalization operation on the face gender feature, the face age feature, and the face race feature;
the attribute feature fusion subunit is used for fusing the normalized face gender feature, the face age feature and the face race feature;
the attribute feature fusion subunit includes:
the feature fusion mode determining sub-subunit is used for inputting the normalized face gender feature, face age feature and face race feature into a pre-constructed attribute feature selection model, and determining a target feature fusion mode according to the output result of the attribute feature selection model;
the target attribute feature selecting sub-subunit is used for selecting, according to the target feature fusion mode, target attribute features from the normalized face gender feature, face age feature and face race feature, and fusing the selected features;
the attribute feature selection model is constructed through the following steps:
acquiring a plurality of model sample face images with known face gender features, face age features and face race features;
constructing a plurality of feature fusion modes from the face gender feature, the face age feature and the face race feature of the model sample face images, wherein each feature fusion mode comprises one or more of the face gender feature, the face age feature and the face race feature;
for each feature fusion mode, fusing the face attribute features contained in that mode with the face features of the model sample face images, then carrying out face recognition, and counting the recognition accuracy corresponding to that feature fusion mode;
associating the face gender feature, the face age feature and the face race feature of each model sample face image with the feature fusion mode having the highest recognition accuracy;
and training the attribute feature selection model by taking the model sample face images, each associated with its face attribute features and feature fusion mode, as a training set.
6. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the face recognition method according to any one of claims 1 to 4.
7. A server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the face recognition method according to any one of claims 1 to 4 when executing the computer program.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911042888.4A CN110866466B (en) | 2019-10-30 | 2019-10-30 | Face recognition method, device, storage medium and server |
PCT/CN2019/118567 WO2021082087A1 (en) | 2019-10-30 | 2019-11-14 | Facial recognition method and device, storage medium and server |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911042888.4A CN110866466B (en) | 2019-10-30 | 2019-10-30 | Face recognition method, device, storage medium and server |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110866466A (en) | 2020-03-06 |
CN110866466B (en) | 2023-12-26 |
Family
ID=69654200
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911042888.4A Active CN110866466B (en) | 2019-10-30 | 2019-10-30 | Face recognition method, device, storage medium and server |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110866466B (en) |
WO (1) | WO2021082087A1 (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111666976B (en) * | 2020-05-08 | 2023-07-28 | 深圳力维智联技术有限公司 | Feature fusion method, device and storage medium based on attribute information |
CN111723762B (en) * | 2020-06-28 | 2023-05-12 | 湖南国科微电子股份有限公司 | Face attribute identification method and device, electronic equipment and storage medium |
CN111860400B (en) * | 2020-07-28 | 2024-06-07 | 平安科技(深圳)有限公司 | Face enhancement recognition method, device, equipment and storage medium |
CN112257581A (en) * | 2020-10-21 | 2021-01-22 | 广州云从凯风科技有限公司 | Face detection method, device, medium and equipment |
CN112396016B (en) * | 2020-11-26 | 2021-07-23 | 武汉宏数信息技术有限责任公司 | Face recognition system based on big data technology |
CN113420616B (en) * | 2021-06-03 | 2023-04-07 | 青岛海信智慧生活科技股份有限公司 | Face recognition method, device, equipment and medium |
CN113344916A (en) * | 2021-07-21 | 2021-09-03 | 上海媒智科技有限公司 | Method, system, terminal, medium and application for acquiring machine learning model capability |
CN113591718A (en) * | 2021-07-30 | 2021-11-02 | 北京百度网讯科技有限公司 | Target object identification method and device, electronic equipment and storage medium |
CN115131859A (en) * | 2022-06-27 | 2022-09-30 | 深圳创维-Rgb电子有限公司 | Gate releasing method and device, gate equipment and computer storage medium |
CN114863540B (en) * | 2022-07-05 | 2022-12-16 | 杭州魔点科技有限公司 | Face attribute analysis-based face recognition online auxiliary method and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106815566A (en) * | 2016-12-29 | 2017-06-09 | 天津中科智能识别产业技术研究院有限公司 | A kind of face retrieval method based on multitask convolutional neural networks |
WO2017107957A1 (en) * | 2015-12-22 | 2017-06-29 | 中兴通讯股份有限公司 | Human face image retrieval method and apparatus |
CN107563336A (en) * | 2017-09-07 | 2018-01-09 | 廖海斌 | Human face similarity degree analysis method, the device and system of game are matched for famous person |
CN107844781A (en) * | 2017-11-28 | 2018-03-27 | 腾讯科技(深圳)有限公司 | Face character recognition methods and device, electronic equipment and storage medium |
WO2019051814A1 (en) * | 2017-09-15 | 2019-03-21 | 达闼科技(北京)有限公司 | Target recognition method and apparatus, and intelligent terminal |
CN109508654A (en) * | 2018-10-26 | 2019-03-22 | 中国地质大学(武汉) | Merge the human face analysis method and system of multitask and multiple dimensioned convolutional neural networks |
CN109522824A (en) * | 2018-10-30 | 2019-03-26 | 平安科技(深圳)有限公司 | Face character recognition methods, device, computer installation and storage medium |
CN110263621A (en) * | 2019-05-06 | 2019-09-20 | 北京迈格威科技有限公司 | Image-recognizing method, device and readable storage medium storing program for executing |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106408335A (en) * | 2016-09-05 | 2017-02-15 | 江苏宾奥机器人科技有限公司 | Internet of Things-based mobile robot information push system |
CN106650653B (en) * | 2016-12-14 | 2020-09-15 | 广东顺德中山大学卡内基梅隆大学国际联合研究院 | Construction method of human face recognition and age synthesis combined model based on deep learning |
CN107742107B (en) * | 2017-10-20 | 2019-03-01 | 北京达佳互联信息技术有限公司 | Facial image classification method, device and server |
2019
- 2019-10-30: CN CN201911042888.4A patent/CN110866466B/en (status: Active)
- 2019-11-14: WO PCT/CN2019/118567 patent/WO2021082087A1/en (status: Application Filing)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017107957A1 (en) * | 2015-12-22 | 2017-06-29 | 中兴通讯股份有限公司 | Human face image retrieval method and apparatus |
CN106815566A (en) * | 2016-12-29 | 2017-06-09 | 天津中科智能识别产业技术研究院有限公司 | A kind of face retrieval method based on multitask convolutional neural networks |
CN107563336A (en) * | 2017-09-07 | 2018-01-09 | 廖海斌 | Human face similarity degree analysis method, the device and system of game are matched for famous person |
WO2019051814A1 (en) * | 2017-09-15 | 2019-03-21 | 达闼科技(北京)有限公司 | Target recognition method and apparatus, and intelligent terminal |
CN107844781A (en) * | 2017-11-28 | 2018-03-27 | 腾讯科技(深圳)有限公司 | Face character recognition methods and device, electronic equipment and storage medium |
CN109508654A (en) * | 2018-10-26 | 2019-03-22 | 中国地质大学(武汉) | Merge the human face analysis method and system of multitask and multiple dimensioned convolutional neural networks |
CN109522824A (en) * | 2018-10-30 | 2019-03-26 | 平安科技(深圳)有限公司 | Face character recognition methods, device, computer installation and storage medium |
CN110263621A (en) * | 2019-05-06 | 2019-09-20 | 北京迈格威科技有限公司 | Image-recognizing method, device and readable storage medium storing program for executing |
Non-Patent Citations (1)
Title |
---|
"A Deep Face Identification Network Enhanced by Facial Attributes Prediction";Fariborz Taherkhani等;《2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops》;第666-671页 * |
Also Published As
Publication number | Publication date |
---|---|
WO2021082087A1 (en) | 2021-05-06 |
CN110866466A (en) | 2020-03-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110866466B (en) | Face recognition method, device, storage medium and server | |
CN109948408B (en) | Activity test method and apparatus | |
CN107423690B (en) | Face recognition method and device | |
CN109086691B (en) | Three-dimensional face living body detection method, face authentication and identification method and device | |
CN111738230B (en) | Face recognition method, face recognition device and electronic equipment | |
JP5801601B2 (en) | Image recognition apparatus, image recognition apparatus control method, and program | |
CN109145742B (en) | Pedestrian identification method and system | |
WO2019071664A1 (en) | Human face recognition method and apparatus combined with depth information, and storage medium | |
Gomez-Barrero et al. | Is your biometric system robust to morphing attacks? | |
WO2019033574A1 (en) | Electronic device, dynamic video face recognition method and system, and storage medium | |
WO2020108075A1 (en) | Two-stage pedestrian search method combining face and appearance | |
CN109426785B (en) | Human body target identity recognition method and device | |
CN111368772B (en) | Identity recognition method, device, equipment and storage medium | |
KR20230107415A (en) | Method for identifying an object within an image and mobile device for executing the method | |
CN111695462B (en) | Face recognition method, device, storage medium and server | |
CN111626371A (en) | Image classification method, device and equipment and readable storage medium | |
TWI798815B (en) | Target re-identification method, device, and computer readable storage medium | |
CN108171138B (en) | Biological characteristic information acquisition method and device | |
US20190347472A1 (en) | Method and system for image identification | |
JP2013186546A (en) | Person retrieval system | |
CN109376717A (en) | Personal identification method, device, electronic equipment and the storage medium of face comparison | |
CN112418189B (en) | Face recognition method, device and equipment for wearing mask and storage medium | |
WO2013075295A1 (en) | Clothing identification method and system for low-resolution video | |
CN111814612A (en) | Target face detection method and related device thereof | |
CN112749605A (en) | Identity recognition method, system and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40018211; Country of ref document: HK | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |