CN113657195A - Face image recognition method, face image recognition equipment, electronic device and storage medium

Face image recognition method, face image recognition equipment, electronic device and storage medium

Info

Publication number
CN113657195A
CN113657195A
Authority
CN
China
Prior art keywords
local
face
image
face image
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110851821.6A
Other languages
Chinese (zh)
Inventor
江俊林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202110851821.6A
Publication of CN113657195A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a face image recognition method, a face image recognition device, an electronic device and a storage medium. The face image recognition method includes the following steps: acquiring a first face image, and extracting a global face feature and a plurality of local face features of the first face image, wherein the local face features respectively correspond to a plurality of local images of the first face image; determining a local face feature weight for the local face feature corresponding to each local image according to the occlusion proportion of that local image; and identifying the identity corresponding to the first face image according to the global face feature, the plurality of local face features and the plurality of local face feature weights. The method solves the problem in the related art that recognition accuracy is low in occlusion scenes when face recognition is performed on the whole face image, and improves the accuracy of face image recognition.

Description

Face image recognition method, face image recognition equipment, electronic device and storage medium
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a method, an apparatus, an electronic device, and a storage medium for recognizing a face image.
Background
Face recognition is a biometric technology that extracts features from a human face to identify the person. Early face recognition techniques extracted feature information from a face using hand-crafted features combined with machine learning and image processing methods, and obtained face similarity by comparing that feature information; such methods, however, are difficult to apply in complex, unconstrained scenes. In recent years, with the rise of deep learning, face recognition has made great progress: the Convolutional Neural Network (CNN) enables a model to extract more abstract facial feature information, so face recognition methods based on deep learning can greatly improve recognition accuracy.
In the related art, features are extracted from the whole face image by a trained model, and face recognition is then performed on the extracted features. This approach achieves a high recognition rate when the face is unoccluded, but when the face is occluded, part of the facial information is lost and the recognition result is affected, so the recognition rate drops in occlusion scenes.
At present, no effective solution has been proposed for the problem in the related art that recognition accuracy is low in occlusion scenes when face recognition is performed on the whole face image.
Disclosure of Invention
The embodiments of the present application provide a face image recognition method, a face image recognition device, an electronic device and a storage medium, to at least solve the problem in the related art that recognition accuracy is low in occlusion scenes when face recognition is performed on the whole face image.
In a first aspect, an embodiment of the present application provides a face image recognition method, including:
acquiring a first face image, and extracting a global face feature and a plurality of local face features of the first face image, wherein the local face features respectively correspond to a plurality of local images of the first face image;
determining a local face feature weight of the local face feature corresponding to each local image according to the occlusion proportion of the local image;
and identifying the identity corresponding to the first face image according to the global face feature, the plurality of local face features and the plurality of local face feature weights.
In some embodiments, extracting the global face feature and the plurality of local face features of the first face image includes:
extracting face key points from the first face image, and cropping the first face image into a plurality of local images according to the face key points corresponding to preset face recognition areas;
and for each local image, extracting the corresponding local face features through a local feature model corresponding to the local image.
In some embodiments, determining the local face feature weight of the local face feature corresponding to each local image according to the occlusion proportion of the local image includes:
for each local image, acquiring the occlusion proportion of the local image through the corresponding local feature model;
determining an occlusion confidence corresponding to the local image according to the occlusion proportion;
and determining the local face feature weight according to the occlusion confidence.
In some embodiments, identifying the identity corresponding to the first face image according to the global face feature, the plurality of local face features and the plurality of local face feature weights includes:
calculating a local similarity weight of each local face feature group according to the local face feature weight in the first face image, the corresponding local face feature weight in a second face image and a preset local similarity coefficient;
calculating the similarity between the first face image and the second face image according to the cosine distance between the global face feature in the first face image and the global face feature in the second face image, the cosine distances between the local face features in the first face image and the corresponding local face features in the second face image, a preset global face feature weight and the local similarity weights;
and determining, according to the similarity, whether the identity corresponding to the first face image is consistent with the identity corresponding to the second face image.
In some embodiments, calculating the similarity between the first face image and the second face image according to the cosine distance between the global face features, the cosine distances between the corresponding local face features, the preset global face feature weight and the local similarity weights includes:
determining the local similarity corresponding to each local face feature group according to the cosine distance between the local face features in the first face image and the corresponding local face features in the second face image and the local similarity weight;
determining the global similarity according to the cosine distance between the global face feature in the first face image and the global face feature in the second face image and the global face feature weight;
and superposing the global similarity and the local similarities and normalizing the result to obtain the similarity between the faces in the first face image and the second face image.
In some of these embodiments, prior to acquiring the first face image, the method includes training a face image recognition model, the training including:
acquiring a training image, extracting a global face feature of the training image through a global sub-model, and extracting a plurality of local face features through local sub-models, wherein the local face features respectively correspond to a plurality of local images of the training image;
determining, based on the local sub-models, the local face feature weight of the local face feature corresponding to each local image according to the occlusion proportion in the local image;
and adjusting the global face feature weight in the global sub-model until the loss value of the global sub-model converges, and adjusting the local face feature weights of the local sub-models until the loss values of the local sub-models converge.
In some embodiments, determining the local face feature weight of the local face feature corresponding to each local image according to the occlusion proportion of the local image includes:
determining the local face feature weight of the local face feature corresponding to each local image according to the occlusion proportion of the local image and at least one of the color and the shape of the occluder.
In a second aspect, an embodiment of the present application provides a face image recognition device, including an obtaining module, a determining module and a judging module:
the obtaining module is used for obtaining a first face image and extracting a global face feature and a plurality of local face features of the first face image, wherein the local face features respectively correspond to a plurality of local images of the first face image;
the determining module is used for determining a local face feature weight of the local face feature corresponding to each local image according to the occlusion proportion of the local image;
and the judging module is used for identifying the identity corresponding to the first face image according to the global face feature, the plurality of local face features and the plurality of local face feature weights.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the face image recognition method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the face image recognition method according to the first aspect.
Compared with the related art, the face image recognition method provided by the embodiments of the present application acquires a first face image and extracts a global face feature and a plurality of local face features of the first face image, the local face features respectively corresponding to a plurality of local images of the first face image; determines the local face feature weight of the local face feature corresponding to each local image according to the occlusion proportion of the local image; and identifies the identity corresponding to the first face image according to the global face feature, the local face features and the local face feature weights. This solves the problem in the related art that recognition accuracy is low in occlusion scenes when face recognition is performed on the whole face image, and improves the accuracy of face image recognition.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of a face image recognition method according to an embodiment of the application;
FIG. 2 is a flow chart of a method for obtaining local facial features according to an embodiment of the present application;
FIG. 3 is a flow chart of a method of similarity calculation according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for training a face image recognition model according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a face recognition model according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of face image recognition according to the preferred embodiment of the present application;
FIG. 7 is a block diagram of a hardware structure of a terminal running the face image recognition method according to an embodiment of the present application;
FIG. 8 is a block diagram of a face image recognition device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a", "an", "the" and similar words in this application do not limit number and may refer to the singular or the plural. The terms "including", "comprising", "having" and any variations thereof in this application are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that comprises a list of steps or modules (units) is not limited to the listed steps or units, but may include other steps or units not expressly listed or inherent to such process, method, product or device. References to "connected", "coupled" and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference herein to "a plurality" means two or more. "And/or" describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The terms "first", "second", "third" and the like herein merely distinguish similar objects and do not denote a particular ordering of the objects.
In different scenes, a face may be occluded by glasses, a mask, or even the person's own bangs. If identification is then based only on the whole face, the influence of the occluded part on the similarity calculation is ignored, which greatly reduces the accuracy of face image recognition.
The embodiment provides a face image recognition method. FIG. 1 is a flowchart of a face image recognition method according to an embodiment of the present application; as shown in FIG. 1, the method includes the following steps:
step S110 is to obtain a first face image, and extract a global face feature and a plurality of local face features of the first face image, where the plurality of local face features respectively correspond to a plurality of local images of the first face image.
The first face image in this embodiment may be obtained from a monitoring image captured by a camera, may also be obtained from a monitoring video of a video camera, and may also be a face image stored in a face database in advance. After the first face image is obtained, the global face features and the local face features of the first face image are extracted through different convolutional neural networks. The global face features are obtained based on the whole first face image, all face information of the whole first face image is integrated, the local face features are obtained based on partial regions of the first face image, and more detailed face information can be extracted. In this embodiment, the first face image is divided into a plurality of local images, and a corresponding local face feature is extracted for each local image, so that a plurality of local face features can be obtained for a plurality of local images.
Specifically, the partial image in the present embodiment may be a forehead, an eyebrow, an eye, a nose, a mouth, a chin, or the like. For example, if the forehead, the eyes and the nose and mouth are selected as the local images, the local face features corresponding to the forehead, the eyes and the nose and mouth are extracted from the forehead, the eyes and the nose and mouth regions of the face image, respectively.
Step S120, determining the local face feature weight of the local face feature corresponding to each local image according to the occlusion proportion of the local image.
The occlusion proportion is the ratio of the area of the occluded part of a local image to the area of the whole local image. The higher the occlusion proportion, the lower the corresponding local face feature weight; that is, the occlusion proportion is inversely related to the local face feature weight, and the specific relationship can be adjusted for different scenes to obtain the best similarity result. The mapping from occlusion proportion to local face feature weight can be obtained by training a convolutional neural network model.
For example, when the face recognition areas are set to the forehead, the eyes and the nose and mouth, a forehead local image, an eye local image and a nose-and-mouth local image are obtained from the face image, the occlusion proportion of each local image is calculated, and the local face feature weights of the forehead, the eyes and the nose and mouth are then determined from the occlusion proportion of each local image, yielding a plurality of local face feature weights.
Step S130, identifying the identity corresponding to the first face image according to the global face feature, the plurality of local face features and the plurality of local face feature weights.
When identifying the identity of a first face image, it is compared with a second face image serving as a reference. If the second face image comes from a surveillance image or video, its global face feature and local face features must also be extracted and its local face feature weights calculated.
In this embodiment, the similarity may be calculated from the global face feature, local face features and local face feature weights of the first face image together with those of the second face image, and whether the faces in the two images correspond to the same identity is determined from the similarity.
Further, when comparing the identities in the first and second face images, a global face feature weight may also be considered; it can be set per scene to obtain the best similarity result. The higher the similarity, the more likely the identity corresponding to the first face image matches the identity corresponding to the second face image. Scenes may differ in illumination intensity, the angle between the person and the camera, crowd density and so on.
Further, the local face features and the local face feature weights correspond one to one: the local face feature of the forehead region has the forehead region's weight, the local face feature of the eye region has the eye region's weight, and different local face features have different local face feature weights.
Through steps S110 to S130, this embodiment evaluates the occlusion proportion of each local image and can flexibly assign the local face feature weight of each local image, making face image recognition more robust. The face recognition method of this embodiment therefore solves the problem in the related art that recognition accuracy is low in occlusion scenes when face recognition is performed on the whole face image, and improves the accuracy of face image recognition.
Preferably, before the global face feature is extracted, the face image is preprocessed, for example scaled and aligned, and if necessary converted to grayscale, to unify the format of the face images and thereby improve processing speed and the accuracy of the similarity calculation.
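As a sketch only, the Python snippet below illustrates such preprocessing with OpenCV; the 112x112 target size, the grayscale option and the [0, 1] normalization are illustrative assumptions rather than values prescribed by the application.

```python
import cv2
import numpy as np

def preprocess_face(image: np.ndarray, size: int = 112, to_gray: bool = False) -> np.ndarray:
    """Scale a face image to a unified format before feature extraction."""
    face = cv2.resize(image, (size, size), interpolation=cv2.INTER_LINEAR)
    if to_gray:
        # Optional grayscale conversion, as mentioned above.
        face = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
    # Normalize pixel values to [0, 1] so images from different sources are
    # comparable when fed to the feature-extraction network.
    return face.astype(np.float32) / 255.0
```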
In some embodiments, the local face feature weight of the local face feature corresponding to each local image is determined according to the occlusion proportion of the local image and at least one of the color and the shape of the occluder.
Specifically, a face is usually occluded by bangs, sunglasses, masks and the like, and these occluders differ in size, color, reflectivity and so on, all of which affect the similarity calculation; ignoring the occlusion and directly calculating the similarity between face images reduces accuracy. This embodiment therefore takes occlusion attributes into account, where an occlusion attribute characterizes the occlusion in a local image. Specifically, the occlusion attribute includes the occlusion proportion and/or the properties of the occluder; for each local image, the local face feature weight of its local face feature is determined according to the occlusion proportion and/or occluder properties of that image, and when the occlusion attribute changes, for example the occlusion proportion increases or decreases or the occluder properties change, the local face feature weight changes accordingly.
The occluder properties are characteristics of the occluder such as its shape and color, and their influence on the local face feature weight is set according to the actual scene. For example, the weight for round glasses is usually greater than that for square glasses, and the weight for dark lenses is usually less than that for light lenses; the specific correspondence between occluder properties and local face feature weights is set per scene. Further, when both the occlusion proportion and the occluder properties are considered, their respective influence coefficients on the local face features are set for the specific scene. The mapping from occlusion proportion and occluder properties to local face feature weights can be obtained by training a convolutional neural network model.
In this embodiment, multiple factors are considered simultaneously to flexibly adjust the local face feature weights and thus obtain a more accurate recognition result.
In some embodiments, FIG. 2 is a flowchart of a method for obtaining local face features according to an embodiment of the present application; as shown in FIG. 2, the method includes the following steps:
step S210, extracting face key points from the first face image, and cutting the first face image into a plurality of local images according to the face key points corresponding to a preset face recognition area.
In this embodiment, a plurality of face recognition regions are preset, and a local image is obtained based on the face recognition regions. Specifically, the face recognition area in this embodiment may be set to be the forehead, the eyebrow, the eyes, the nose, the mouth, the chin, and the like, and after the face recognition area is set, the corresponding partial image may be acquired in the first face image based on the face recognition area. For example, if the face recognition regions are the forehead, the eyes, and the nose, the first face image is divided into local regions corresponding to the forehead, the eyes, and the nose, respectively.
After the face image is acquired, extracting key points in the face image through a face key point detection algorithm, specifically, the face key point detection algorithm may be 21-point key point detection, 68-point key point detection, or 81-point key point detection. Then, based on the face alignment technology, key points corresponding to a preset face recognition area are detected from key points of the face image, and the face image is cut according to the detected key points, so that a plurality of local images are obtained.
For example, key points corresponding to the forehead, the eyes and the nose and mouth are extracted from all key points of the face image, so that the forehead area, the eye area and the nose and mouth area in the face image are identified, and finally the forehead area, the eye area and the nose and mouth area in the face image are cut to obtain a plurality of local images.
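The following Python sketch shows one way such landmark-based cropping could be done; the dlib-style 68-point index groups, the padding margin and the forehead approximation are assumptions for illustration, since the embodiment leaves the key-point scheme and the exact crop boundaries to the implementer.

```python
import numpy as np

# dlib-style 68-point index groups (an assumption; the embodiment also allows
# 21- or 81-point schemes).
EYE_IDX = list(range(36, 48))                               # both eyes
NOSE_MOUTH_IDX = list(range(27, 36)) + list(range(48, 68))  # nose and mouth
BROW_IDX = list(range(17, 27))                              # eyebrows

def crop_region(image: np.ndarray, points: np.ndarray, margin: float = 0.15) -> np.ndarray:
    """Crop the bounding box of a set of landmarks, padded by a small margin."""
    h, w = image.shape[:2]
    x0, y0 = points.min(axis=0)
    x1, y1 = points.max(axis=0)
    mx, my = margin * (x1 - x0), margin * (y1 - y0)
    return image[max(int(y0 - my), 0):min(int(y1 + my), h),
                 max(int(x0 - mx), 0):min(int(x1 + mx), w)]

def crop_face_regions(image: np.ndarray, landmarks: np.ndarray) -> dict:
    """Split an aligned face image into forehead, eye and nose-mouth crops.

    `landmarks` is a (68, 2) array of (x, y) key points. The 68-point scheme
    has no forehead points, so the forehead crop is approximated as the band
    above the eyebrows (an assumption).
    """
    brows = landmarks[BROW_IDX]
    x0, _ = brows.min(axis=0)
    x1, y1 = brows.max(axis=0)
    return {
        "forehead": image[:int(y1), int(x0):int(x1)],
        "eyes": crop_region(image, landmarks[EYE_IDX]),
        "nose_mouth": crop_region(image, landmarks[NOSE_MOUTH_IDX]),
    }
```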
Step S220, for each local image, extracting the corresponding local face features through the local feature model corresponding to the local image.
In this embodiment, each local image corresponds to one local feature model, and local face feature extraction is performed by the corresponding model: the forehead local image has a dedicated forehead feature model, the eye local image a dedicated eye feature model, and the nose-and-mouth local image a dedicated nose-and-mouth feature model.
Through steps S210 and S220, this embodiment crops the face image into a plurality of local images based on face key-point detection and alignment, and then extracts the corresponding local face features with different local feature models. Since each local feature model is trained on a specific face recognition area, it recognizes the corresponding local face features better; extracting local face features with multiple local feature models and fusing their recognition results therefore improves the accuracy of identifying the face in the first face image.
In some embodiments, the local face feature weight is determined as follows. For each local image, the occlusion proportion of the local image is obtained through the corresponding local feature model; the occlusion proportion is the ratio of the occluded area to the area of the whole local image. Specifically, the occluded part is identified through the color features of the pixels and the areas of connected components in the local image, the occluded area and the area of the whole local image are calculated, and the occlusion proportion is obtained. Then an occlusion confidence corresponding to the local image is determined according to the occlusion proportion, and the local face feature weight is determined according to the occlusion confidence and the occluder properties. The occlusion confidence represents how trustworthy the local image is under occlusion: the higher the occlusion proportion, the lower the occlusion confidence. The correspondence between occlusion proportion and occlusion confidence can be set by an engineer or obtained by training a convolutional neural network model. Further, when only the occlusion proportion is considered, the occlusion confidence can be used directly as the local face feature weight; when occluder attributes must also be considered, a coefficient can be applied to the occlusion confidence, which then determines the local face feature weight together with the occluder attributes.
In this embodiment, the local face feature weight is determined by the occlusion proportion and can be adjusted flexibly according to the severity of the occlusion in the local image. When the occlusion is severe, for example the nose and mouth are completely covered by a mask, the local face feature weight of that local image can be set to 0 so it does not participate in the similarity calculation, avoiding its influence on the overall similarity result. Determining the local face feature weight by the occlusion proportion therefore further improves the accuracy of face image recognition.
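A minimal Python sketch of this occlusion-proportion-to-weight pipeline follows; the (1 - ratio) ** gamma confidence mapping and the occluder_factor coefficient are illustrative assumptions, since the embodiment allows these mappings to be set by an engineer or learned by a CNN.

```python
import numpy as np

def occlusion_ratio(occlusion_mask: np.ndarray) -> float:
    """Ratio of occluded area to the area of the whole local image.

    `occlusion_mask` is a boolean map marking occluded pixels, e.g. produced
    from pixel color features and connected-component analysis as described
    above, or predicted by the local feature model.
    """
    return float(occlusion_mask.sum()) / occlusion_mask.size

def occlusion_confidence(ratio: float, gamma: float = 1.0) -> float:
    """Map the occlusion ratio to a confidence in [0, 1].

    Confidence falls as the ratio rises and reaches 0 for a fully occluded
    region; the (1 - ratio) ** gamma form is an illustrative assumption.
    """
    return (1.0 - ratio) ** gamma

def local_feature_weight(ratio: float, occluder_factor: float = 1.0) -> float:
    """Local face feature weight P in [0, 1].

    `occluder_factor` is a hypothetical coefficient for occluder attributes
    such as color and shape (e.g. lower for large, dark occluders).
    """
    return occlusion_confidence(ratio) * occluder_factor
```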
This embodiment also provides a similarity calculation method for the case where the face images to be compared are a first face image and a second face image. FIG. 3 is a flowchart of a similarity calculation method according to an embodiment of the present application; as shown in FIG. 3, the method includes the following steps:
Step S310, calculating the local similarity weight of each local face feature group according to the local face feature weight in the first face image, the corresponding local face feature weight in the second face image and a preset local similarity coefficient.
In this embodiment, a local face feature group is a pair of corresponding local face features from the first and second face images. Both images have a plurality of local face features, for example when the preset face recognition areas are the forehead, the eyes and the nose and mouth, and because the occlusion conditions differ between the two images, each local face feature has its own weight in each image. The weight that each pair of corresponding local face features carries in the overall similarity calculation, recorded as the local similarity weight, must therefore be computed first. Specifically, the local similarity weight is the product of the local face feature weight in the first face image, the corresponding local face feature weight in the second face image, and the preset local similarity coefficient.
Taking the forehead region as an example, the local face feature of the forehead region in the first face image and that in the second face image form one local face feature group. The local similarity weight of the forehead region is calculated from the forehead weight in the first face image, the forehead weight in the second face image and the preset local similarity coefficient, where the local similarity coefficient is a preset value that can be adjusted for the actual scene.
Step S320, calculating the similarity between the first face image and the second face image according to the cosine distance between the global face features of the two images, the cosine distances between corresponding local face features, a preset global face feature weight and the local similarity weights.
After the local similarity weights are obtained, this embodiment computes the similarity using the cosine distance, i.e. the cosine of the angle between two feature vectors; in other embodiments the Euclidean or Mahalanobis distance may be used instead.
Specifically, the cosine distances between all corresponding local face features of the first and second face images are calculated, giving one cosine distance per local face feature group, and the similarity between the two images is obtained from these together with the cosine distance of the global face features. The preset global face feature weight and the local similarity weights serve as the coefficients of the respective cosine distances.
Step S330, determining, according to the similarity, whether the identity corresponding to the first face image is consistent with the identity corresponding to the second face image.
In this embodiment a similarity threshold may be set: when the similarity between the first and second face images is greater than or equal to the threshold, the faces in the two images are considered the same. The threshold may be obtained by training a convolutional neural network model or adjusted for the actual scene.
Through steps S310 to S330, this embodiment computes the similarity based on the cosine distance, which effectively improves the accuracy of the similarity calculation.
Further, the similarity between the first and second face images can be obtained as follows. The local similarity of each local face feature group is determined from the cosine distance between the local face feature in the first face image and the corresponding local face feature in the second face image, and from the local similarity weight, preferably as the product of the two; when the images have several local face feature groups, several local similarities are obtained. The global similarity is then determined from the cosine distance between the global face features of the two images and the global face feature weight, again preferably as their product. Finally, the global similarity and the local similarities are superposed and normalized to obtain the similarity between the faces in the first and second face images.
For example, let f denote a feature extracted from a face image, where the superscript indexes the image (1 for the first face image, 2 for the second) and the subscript indexes the region: subscript 1 denotes the global face feature and, with the preset face recognition areas being the forehead, the eyes and the nose and mouth, subscripts 2, 3 and 4 denote the local face features of the forehead, the eyes and the nose and mouth respectively. The first and second face images thus yield the features $f_1^1, f_2^1, f_3^1, f_4^1$ and $f_1^2, f_2^2, f_3^2, f_4^2$.

The local face feature weight of each local face feature is denoted by $P_i^j$, with the same indexing: superscript 1 or 2 for the first or second face image, and subscripts 2, 3 and 4 for the forehead, the eyes and the nose and mouth. There are therefore $P_2^1, P_3^1, P_4^1, P_2^2, P_3^2, P_4^2$. Each $P_i^j$ (i = 2, 3, 4; j = 1, 2) is determined from the occlusion confidence given by the face attribute algorithm, lies in the range $[0, 1]$ and can be adjusted dynamically. For example, if the face in the second face image wears sunglasses, the face attribute algorithm gives a specific value of $P_3^2$ according to the size, shape and color of the sunglasses; the more severe the occlusion, the lower the occlusion confidence and the smaller the share of that local image's features in recognition. Conversely, if the face in the first face image wears no mask, the algorithm gives a relatively high occlusion confidence $P_4^1$, meaning the local face features of that local image carry a larger share in recognition.

$\beta_1$ is the preset global face feature weight, and $\beta_2$, $\beta_3$ and $\beta_4$ are the preset local similarity coefficients corresponding to the forehead, the eyes and the nose and mouth respectively. In this example $\beta_i \in (0, 1)$ for i = 1, 2, 3, 4, and the values of $\beta_i$ can be adjusted according to the application scenario.

For example, $f_2^1$ and $f_2^2$ form one local face feature group, and $\beta_2 P_2^1 P_2^2$ is the local similarity weight corresponding to the forehead region. The cosine distance between $f_i^1$ and $f_i^2$ can be calculated by Equation 1:

$$\cos(f_i^1, f_i^2) = \frac{f_i^1 \cdot f_i^2}{\|f_i^1\| \, \|f_i^2\|} \qquad (1)$$

The local similarity weight of each local face feature group is calculated by Equations 2 to 4:

$$\alpha_2 = \beta_2 \, P_2^1 \, P_2^2 \qquad (2)$$

$$\alpha_3 = \beta_3 \, P_3^1 \, P_3^2 \qquad (3)$$

$$\alpha_4 = \beta_4 \, P_4^1 \, P_4^2 \qquad (4)$$

where $\alpha_2$, $\alpha_3$ and $\alpha_4$ are the local similarity weights of the local face feature groups of the forehead, eye and nose-and-mouth regions respectively.

Finally, the similarity between the faces in the first and second face images is calculated by Equation 5:

$$\mathrm{sim} = \frac{\sum_{i=1}^{4} \alpha_i \cos(f_i^1, f_i^2)}{\sum_{i=1}^{4} \alpha_i} \qquad (5)$$

where sim denotes the similarity between the faces in the two images and $\alpha_1 = \beta_1$ is the preset global face feature weight. In Equation 5, $\alpha_1 \cos(f_1^1, f_1^2)$ is the global similarity, and $\alpha_2 \cos(f_2^1, f_2^2)$, $\alpha_3 \cos(f_3^1, f_3^2)$ and $\alpha_4 \cos(f_4^1, f_4^2)$ are the local similarities corresponding to the forehead, eye and nose-and-mouth regions respectively.
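The Python sketch below implements Equations 1 to 5; the list layout of the features, the normalization by the weight sum and the 0.6 decision threshold are assumptions for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Equation 1: cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def face_similarity(feats1, feats2, P1, P2, beta):
    """Weighted, normalized similarity following Equations 2 to 5.

    feats1, feats2: lists [global, forehead, eyes, nose_mouth] of feature
                    vectors for the first and second face image.
    P1, P2:         occlusion confidences [P2, P3, P4] of the local regions
                    of each image, each in [0, 1].
    beta:           [beta1, beta2, beta3, beta4], the preset global face
                    feature weight and local similarity coefficients.
    """
    # Equations 2-4: each local coefficient is scaled by both images'
    # occlusion confidences; alpha_1 = beta_1 for the global features.
    alphas = [beta[0]] + [b * p1 * p2 for b, p1, p2 in zip(beta[1:], P1, P2)]
    sims = [cosine_similarity(f1, f2) for f1, f2 in zip(feats1, feats2)]
    # Equation 5: superpose the global and local similarities and normalize
    # by the sum of the weights (this normalization form is an assumption).
    return sum(a * s for a, s in zip(alphas, sims)) / sum(alphas)

# Identity decision against a preset similarity threshold (0.6 is illustrative):
# same_person = face_similarity(feats1, feats2, P1, P2, beta) >= 0.6
```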
In this embodiment, the global similarity and the local similarities are superposed and then normalized; that is, the similarity is computed in a concatenated manner in which the dimension of the combined representation equals the sum of the dimensions of the individual features. The local face features are not fused with the global face feature, so each local image retains its independence in the overall similarity judgment. Compared with computing the similarity after feature fusion, this accurately and intuitively reflects the contribution of each local face feature to the overall similarity, further improving the accuracy of the similarity calculation.
FIG. 4 is a flowchart of a training method for a face image recognition model according to an embodiment of the present application; as shown in FIG. 4, the method includes the following steps:
Step S410, acquiring a training image, extracting a global face feature of the training image through a global sub-model, and extracting a plurality of local face features through local sub-models, where the local face features respectively correspond to a plurality of local images of the training image; the training image may come from a snapshot captured during surveillance or a frame of a surveillance video;
Step S420, determining, based on the local sub-models, the local face feature weight of the local face feature corresponding to each local image according to the occlusion proportion in the local image;
Step S430, adjusting the global face feature weight in the global sub-model until the loss value of the global sub-model converges, and adjusting the local face feature weights of the local sub-models until the loss values of the local sub-models converge.
The face image recognition model in this embodiment comprises one global sub-model and a plurality of local sub-models. The global sub-model extracts features from the whole face and its loss consists of a recognition loss; each local sub-model extracts face features from one specific region, its loss consists of a recognition loss and an occlusion loss, and the losses of different local sub-models do not affect each other.
For example, with the preset face recognition areas being the forehead, the eyes and the nose and mouth, FIG. 5 is a schematic structural diagram of a face recognition model according to an embodiment of the present application. As shown in FIG. 5, for a face image aligned by face key points, on one hand the complete face image is fed to the global sub-model for feature extraction; its convolutional neural network is CNN1, the extracted global face feature is passed to a fully connected layer for dimension transformation, and a loss value is finally computed through a loss function. The loss value of the global sub-model, denoted Loss1, evaluates the recognition accuracy on the complete face image. On the other hand, the complete face image is cropped according to the face key points into local images corresponding to the forehead, eye and nose-and-mouth regions, and each local image is sent to its own local sub-model for independent training: CNN2 for the forehead region, CNN3 for the eye region and CNN4 for the nose-and-mouth region. For each local sub-model, after the local face features are extracted and passed through the fully connected layer, a recognition loss and an occlusion loss are computed; the recognition loss evaluates the recognition accuracy on the local image, and the occlusion loss evaluates the accuracy of the occlusion-attribute estimate for the local image. The recognition and occlusion losses of the local sub-models are denoted Loss2 and Loss2', Loss3 and Loss3', and Loss4 and Loss4' respectively. The face image recognition model in this embodiment therefore contains one global sub-model and three local sub-models.
Specifically, during training, every training image in the training set must be labeled; the labels cover factors such as occlusion position, area, color and shape, with 1 denoting complete occlusion, 0 denoting no occlusion, and occlusion coefficients in other cases ranging between 0 and 1. During training, each loss value independently updates the parameters of its own local sub-model, and the loss values of different local sub-models do not affect each other.
Through steps S410 to S430, the occlusion proportion of each local image is evaluated by a local sub-model and the local face feature weight of the local image is assigned flexibly, making the face image recognition model more robust. Further, recognizing face images in this embodiment requires only a few models to be trained, greatly reducing training, time and storage costs.
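As an illustration of this training structure, the PyTorch-style sketch below shows one local sub-model with an identity head and an occlusion head; the backbone layers, feature dimension, number of identities and the MSE form of the occlusion loss are all assumptions, since the application does not specify the network architectures or loss functions.

```python
import torch
import torch.nn as nn

class LocalSubModel(nn.Module):
    """One local sub-model (a simplified stand-in for CNN2/CNN3/CNN4) with an
    identity head for the recognition loss and an occlusion head for the
    occlusion loss; layer sizes are illustrative."""

    def __init__(self, feat_dim: int = 128, num_ids: int = 1000):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.id_head = nn.Linear(feat_dim, num_ids)  # recognition (e.g. Loss2)
        self.occ_head = nn.Linear(feat_dim, 1)       # occlusion (e.g. Loss2')

    def forward(self, x):
        feat = self.backbone(x)
        return self.id_head(feat), torch.sigmoid(self.occ_head(feat)).squeeze(1)

id_loss = nn.CrossEntropyLoss()  # recognition loss
occ_loss = nn.MSELoss()          # regression to the occlusion coefficient in [0, 1]

def local_step(model, crop, id_label, occ_label):
    """Loss for one local sub-model; each sub-model is updated independently,
    so the losses of different sub-models do not affect each other. The
    global sub-model is analogous but has no occlusion head."""
    logits, occ = model(crop)
    return id_loss(logits, id_label) + occ_loss(occ, occ_label)
```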
The embodiments of the present application are described and illustrated below by means of preferred embodiments.
The accuracy of face recognition is low under strong illumination, occlusion, blur, a large angle between the face and the camera, and similar conditions. Under occlusion in particular, the face loses information: sunglasses remove part of the eye region, a mask removes the nose-and-mouth region, and bangs remove part of the forehead region. In practice, when a snapshot of a person is compared with an image of the same person in a face image library, if the face in the snapshot wears a mask and the library image does not, the computed similarity is lower than the true value, which may cause misjudgment. A single model that extracts features from the whole face image produces a representation of the overall face information and often neglects details.
This application crops the complete face image into local images that represent local face information, trains classifiers on them separately, and finally computes the similarity from both the global and the local face features. This improves recognition accuracy and is robust to common occlusion scenarios, such as whether a mask is worn, whether sunglasses are worn, and whether the forehead is covered by bangs.
FIG. 6 is a schematic structural diagram of face image recognition according to a preferred embodiment of the present application. As shown in FIG. 6, before feature extraction it may be determined whether a face image contains an occluder such as bangs, a mask or sunglasses, and if so the occluder information is recorded and stored. Each face image is then aligned by key points and cropped, mainly into three local images corresponding to the forehead, eye and nose-and-mouth regions, since these three parts carry the most face information. The complete face image and the three local images are passed through their corresponding sub-models, giving one group of global face features, denoted feature1, and three groups of local face features, denoted feature2, feature3 and feature4. During comparison, the feature vectors of corresponding parts are compared respectively, and the comparison results are finally combined by weighted summation with the occlusion attributes, yielding the final similarity.
Taking a first face image and a second face image as an example, the first face image is the image to be compared and retrieved, and the second face image is an image in the face database. The identification and comparison process is as follows: both images are fed to CNN1 to obtain two global face features; each cropped local image is input to its corresponding CNN module to obtain its local face features and occlusion attributes, recognition is based on the local face features, and the local face feature weights are determined from the occlusion attributes; finally, the similarity between the two images is calculated. In this embodiment the same operations are performed on both images, i.e. global and local face features are extracted for each. Optionally, when the global face feature, local face features and local face feature weights of the second face image are already stored in the face database, feature extraction and weight calculation may be performed only on the first face image, and the parameters of the second face image fetched directly from the database.
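Reusing face_similarity from the earlier sketch, the hypothetical helper below illustrates this retrieval flow against a database whose features and weights are precomputed; the data layout and names are assumptions.

```python
def best_match(probe, gallery, beta):
    """probe: (feats1, P1) extracted from the first face image; gallery:
    iterable of (person_id, feats2, P2) precomputed for second face images.
    Returns the identity with the highest similarity."""
    feats1, P1 = probe
    scored = ((pid, face_similarity(feats1, feats2, P1, P2, beta))
              for pid, feats2, P2 in gallery)
    return max(scored, key=lambda t: t[1])
```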
In the related art, the proportion coefficients of the local and global face features are fixed once the model is trained and cannot be adjusted dynamically according to the occlusion attributes. For example, how high a mask sits on the nose directly determines the occluded area of the face and thus the share of each local image in the overall recognition, and the shape and color of the occluder likewise affect each local image's share of the overall similarity; fixing the local face feature weight of every local image therefore degrades the recognition result. In this embodiment, the occlusion condition of the face is judged accurately from the attributes of the face image, and recognition accuracy is improved by combining attribute recognition: the face image recognition model has multiple local sub-models, the global and local proportion weights can be adjusted flexibly so recognition is more robust, and the parameters of each model, such as the global face feature weight and the local similarity coefficients, can be set per scene, giving broader scene applicability.
It should be noted that the steps illustrated in the above flowcharts may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps may be performed in an order different from the one shown or described here.
The method embodiments provided in the present application may be executed on a terminal, a computer or a similar computing device. Taking execution on a terminal as an example, FIG. 7 is a block diagram of the hardware structure of a terminal running the face image recognition method according to an embodiment of the present application. As shown in FIG. 7, the terminal 70 may include one or more processors 702 (only one is shown in FIG. 7; the processor 702 may include, but is not limited to, a processing device such as a microcontroller (MCU) or a programmable logic device (FPGA)) and a memory 704 for storing data, and optionally a transmission device 706 for communication functions and an input-output device 708. It will be understood by those skilled in the art that the structure shown in FIG. 7 is only an illustration and does not limit the structure of the terminal. For example, the terminal 70 may include more or fewer components than shown in FIG. 7, or have a different configuration.
The memory 704 may be used to store a control program, for example, a software program and a module of an application software, such as a control program corresponding to the face image recognition method in the embodiment of the present application, and the processor 702 executes various functional applications and data processing by running the control program stored in the memory 704, so as to implement the above-mentioned method. The memory 704 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 704 may further include memory located remotely from the processor 702, which may be connected to the terminal 70 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 706 is used to receive or transmit data via a network. Specific examples of such a network include a wireless network provided by the communication provider of the terminal 70. In one example, the transmission device 706 includes a network interface controller (NIC) that connects to other network devices via a base station so as to communicate with the internet. In another example, the transmission device 706 may be a radio frequency (RF) module that communicates with the internet wirelessly.
The present embodiment further provides a face image recognition device, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated here. As used below, the terms "module", "unit", "subunit", and the like may denote a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 8 is a block diagram of a face image recognition device according to an embodiment of the present application. As shown in fig. 8, the device includes an obtaining module 81, a determining module 82, and a judging module 83:
an obtaining module 81, configured to obtain a first face image and extract a global face feature and a plurality of local face features of the first face image, where the plurality of local face features respectively correspond to a plurality of local images of the first face image; a determining module 82, configured to determine, according to the shielding proportion of each local image, a local face feature weight for the local face features corresponding to that local image; and a judging module 83, configured to identify the identity corresponding to the first face image according to the global face feature, the plurality of local face features, and the plurality of local face feature weights.
In this embodiment, the determining module 82 evaluates the occlusion proportion of each local image and assigns the corresponding local face feature weight flexibly, which makes face image recognition more robust.
In some embodiments, the face image recognition device includes a cropping module configured to extract face key points from the first face image and crop the first face image into a plurality of local images according to the face key points corresponding to preset face recognition areas; for each local image, the corresponding local face features are then extracted through the local feature model corresponding to that local image.
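A minimal sketch of such key-point-driven cropping follows; the five-landmark layout and the region-to-landmark mapping are assumptions, since the patent only requires that local images correspond to preset face recognition areas.

```python
# Sketch: crop a face image into local images around landmark groups.
import numpy as np

REGIONS = {                      # hypothetical preset recognition areas
    "eyes":  [0, 1],             # left eye, right eye
    "nose":  [2],                # nose tip
    "mouth": [3, 4],             # mouth corners
}

def crop_regions(image: np.ndarray, landmarks: np.ndarray, margin: int = 24):
    """Cut the face image into local images around landmark groups."""
    h, w = image.shape[:2]
    crops = {}
    for name, idx in REGIONS.items():
        pts = landmarks[idx]
        x0 = max(int(pts[:, 0].min()) - margin, 0)
        y0 = max(int(pts[:, 1].min()) - margin, 0)
        x1 = min(int(pts[:, 0].max()) + margin, w)
        y1 = min(int(pts[:, 1].max()) + margin, h)
        crops[name] = image[y0:y1, x0:x1]
    return crops

face = np.zeros((112, 112, 3), dtype=np.uint8)
pts = np.array([[38, 45], [74, 45], [56, 65], [44, 85], [68, 85]])
print({k: v.shape for k, v in crop_regions(face, pts).items()})
```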
In some embodiments, the face image recognition device includes a confidence module configured to, for each local image: obtain the occlusion proportion of the local image through the corresponding local feature model; determine an occlusion confidence for the local image according to the occlusion proportion; and determine the local face feature weight according to the occlusion confidence.
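The exact mapping from occlusion proportion to confidence to weight is not specified; the sketch below assumes a simple linear mapping with a small floor so that heavily occluded regions are attenuated rather than discarded outright.

```python
# Sketch (assumed mapping): occlusion proportion -> confidence -> weight.
def occlusion_confidence(occlusion_ratio: float) -> float:
    # Confidence that the region is usable: 1 when fully visible,
    # 0 when fully occluded; a per-scene threshold could be added.
    return max(0.0, min(1.0, 1.0 - occlusion_ratio))

def local_feature_weight(occlusion_ratio: float, floor: float = 0.05) -> float:
    # The floor is a design choice, not mandated by the patent text.
    return max(floor, occlusion_confidence(occlusion_ratio))

for r in (0.0, 0.4, 0.95):
    print(r, "->", round(local_feature_weight(r), 3))
```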
In some embodiments, the face image recognition device includes a similarity calculation module configured to: calculate a local similarity weight for each local face feature group according to the local face feature weight in the first face image, the local face feature weight in the corresponding second face image, and a preset local similarity coefficient; calculate the similarity between the first face image and the second face image according to the cosine distance between the global face features of the two images, the cosine distances between the local face features in the first face image and the corresponding local face features in the second face image, a preset global face feature weight, and the local similarity weights; and judge, according to the similarity, whether the identity corresponding to the first face image is consistent with the identity corresponding to the second face image.
In some embodiments, the similarity calculation module of the face image recognition device includes a cosine calculation unit configured to: determine the local similarity of each local face feature group according to the cosine distance between a local face feature in the first face image and the corresponding local face feature in the second face image, together with the local similarity weight; determine the global similarity according to the cosine distance between the global face features of the two images and the global face feature weight; and superpose the global similarity with the local similarities and normalize the result to obtain the similarity between the faces in the first and second face images.
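Putting the last two paragraphs together, a minimal numeric sketch of this fusion might look as follows; the pairwise-product form of the local similarity weights and the normalization by the total applied weight are assumptions, since the patent does not give closed-form formulas.

```python
# Sketch: fuse global and per-region cosine similarities.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_similarity(g1, g2, locals1, locals2, w1, w2,
                     global_weight=0.5, local_coeff=0.5):
    # Local similarity weight per feature group: combine the two images'
    # local weights with the preset local similarity coefficient.
    lw = [local_coeff * a * b for a, b in zip(w1, w2)]
    global_sim = global_weight * cosine(g1, g2)
    local_sims = [w * cosine(f1, f2)
                  for w, f1, f2 in zip(lw, locals1, locals2)]
    # Superpose and normalize by the total weight actually applied.
    total_w = global_weight + sum(lw)
    return (global_sim + sum(local_sims)) / total_w

rng = np.random.default_rng(0)
g = rng.normal(size=128)
loc = [rng.normal(size=64) for _ in range(3)]
# Comparing an image with itself yields similarity 1.0.
print(fused_similarity(g, g, loc, loc, [1.0, 0.5, 0.0], [1.0, 0.6, 0.1]))
```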
In some embodiments, the face image recognition device includes a training module configured to: acquire a training image, extract the global face feature of the training image through a global sub-model, and extract a plurality of local face features through local sub-models, where the local face features respectively correspond to a plurality of local images of the training image; determine, based on each local sub-model, the local face feature weight of the local face features corresponding to each local image according to the occlusion proportion of that local image; and adjust the global face feature weight in the global sub-model until the loss value of the global sub-model converges, and adjust the local face feature weights of the local sub-models until the loss values of the local sub-models converge.
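A schematic PyTorch-style sketch of this training procedure follows; the backbone structure, classifier heads, and joint optimization of all sub-models are assumptions, since the patent only specifies that each sub-model is trained until its loss converges.

```python
# Sketch: jointly train a global sub-model and several local sub-models.
import torch
import torch.nn as nn

class SubModel(nn.Module):
    def __init__(self, in_dim=112 * 112, feat_dim=128, classes=1000):
        super().__init__()
        # Toy backbone; a real model would use a convolutional network.
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, feat_dim))
        self.head = nn.Linear(feat_dim, classes)  # identity classifier
    def forward(self, x):
        feat = self.backbone(x)
        return feat, self.head(feat)

global_model = SubModel()
local_models = [SubModel(in_dim=48 * 48) for _ in range(3)]
params = list(global_model.parameters())
for m in local_models:
    params += list(m.parameters())
opt = torch.optim.SGD(params, lr=0.01)
criterion = nn.CrossEntropyLoss()

def train_step(face, crops, label):
    opt.zero_grad()
    _, logits = global_model(face)
    loss = criterion(logits, label)                # global sub-model loss
    for m, crop in zip(local_models, crops):
        _, l_logits = m(crop)
        loss = loss + criterion(l_logits, label)   # local sub-model losses
    loss.backward()
    opt.step()
    return loss.item()

face = torch.randn(4, 1, 112, 112)
crops = [torch.randn(4, 1, 48, 48) for _ in range(3)]
label = torch.randint(0, 1000, (4,))
print(train_step(face, crops, label))
```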
In some embodiments, the face image recognition device includes a preprocessing module configured to preprocess the face image, where the preprocessing includes at least one of scaling and alignment.
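One common realization of alignment plus scaling is a landmark-based similarity transform; the sketch below assumes OpenCV and a five-point template, neither of which is specified by the patent.

```python
# Sketch: align a face to a canonical 5-point template; the warp to a
# fixed 112x112 output performs the scaling at the same time.
import cv2
import numpy as np

TEMPLATE = np.float32([[38.3, 51.7], [73.5, 51.5], [56.0, 71.7],
                       [41.5, 92.4], [70.7, 92.2]])  # for 112x112 output

def preprocess(image: np.ndarray, landmarks: np.ndarray) -> np.ndarray:
    """Align and scale a face image to 112x112 given 5 landmarks."""
    M, _ = cv2.estimateAffinePartial2D(np.float32(landmarks), TEMPLATE)
    return cv2.warpAffine(image, M, (112, 112))
```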
In some embodiments, the face image recognition device includes a weight optimization module configured to determine the local face feature weight of the local face features corresponding to each local image according to the occlusion proportion of that local image and at least one of the color and the shape of the occluding object.
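How the color and shape of the occluding object should modulate the weight is not specified; the sketch below is one entirely assumed heuristic in which an occluder whose color is close to skin tone, or whose shape covers more key landmarks, is penalized more.

```python
# Sketch (assumed heuristic): fold occluder color and shape into the weight.
def adjusted_weight(base_weight: float,
                    color_similarity_to_skin: float,
                    landmark_coverage: float) -> float:
    # Both factors are in [0, 1]; higher means a more harmful occluder.
    penalty = 0.5 * color_similarity_to_skin + 0.5 * landmark_coverage
    return base_weight * (1.0 - penalty)

print(adjusted_weight(0.8, color_similarity_to_skin=0.9, landmark_coverage=0.4))
```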
The above modules may be functional modules or program modules, and may be implemented in software or hardware. For modules implemented in hardware, the modules may be located in the same processor, or distributed among different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic device may further include a transmission device and an input/output device, both connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute, by means of a computer program, the following steps:
S1: acquire a first face image, and extract the global face feature and a plurality of local face features of the first face image, where the local face features respectively correspond to a plurality of local images of the first face image.
S2: determine the local face feature weight of the local face features corresponding to each local image according to the shielding proportion of that local image.
S3: identify the identity corresponding to the first face image according to the global face feature, the plurality of local face features, and the plurality of local face feature weights.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the face image recognition method in the above embodiments, an embodiment of the present application provides a storage medium having a computer program stored thereon; when executed by a processor, the computer program implements any of the face image recognition methods in the above embodiments.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application; although their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A face image recognition method is characterized by comprising the following steps:
acquiring a first face image, and extracting global face features and a plurality of local face features of the first face image, wherein the local face features respectively correspond to a plurality of local images of the first face image;
determining a local face feature weight of local face features corresponding to each local image according to the shielding proportion of each local image;
and identifying the identity corresponding to the first face image according to the global face features, the plurality of local face features and the plurality of local face feature weights.
2. The method according to claim 1, wherein the extracting the global face feature and the plurality of local face features of the first face image comprises:
extracting face key points from the first face image, and cutting the first face image into a plurality of local images according to the face key points corresponding to a preset face recognition area;
and for each local image, extracting corresponding local face features through a local feature model corresponding to the local image.
3. The method according to claim 1, wherein the determining the local face feature weight of the local face feature corresponding to each local image according to the occlusion ratio of each local image comprises:
for each local image, acquiring the shielding proportion of the local image through a corresponding local feature model;
determining an occlusion confidence corresponding to the local image according to the occlusion proportion;
and determining the local face feature weight according to the shielding confidence.
4. The method according to claim 1, wherein the identifying the identity corresponding to the first face image according to the global face features, the plurality of local face features, and the plurality of local face feature weights comprises:
calculating a local similarity weight of a local face feature group according to the local face feature weight in the first face image, the local face feature weight in the corresponding second face image and a preset local similarity coefficient;
calculating the similarity of the first face image and the second face image according to the cosine distance between the global face features in the first face image and the global face features in the second face image, the cosine distance between the local face features in the first face image and the corresponding local face features in the second face image, a preset global face feature weight and the local similarity weight;
and judging whether the identity corresponding to the first face image is consistent with the identity corresponding to the second face image according to the similarity.
5. The method according to claim 4, wherein the calculating the similarity between the first face image and the second face image according to the cosine distance between the global face features in the first face image and the global face features in the second face image, the cosine distance between the local face features in the first face image and the corresponding local face features in the second face image, a preset global face feature weight and the local similarity weight comprises:
determining local similarity corresponding to the local face feature group according to the cosine distance between the local face features in the first face image and the corresponding local face features in the second face image and the local similarity weight;
determining global similarity according to the cosine distance between the global face features in the first face image and the global face features in the second face image and the global face feature weight;
and superposing the global similarity and the local similarities and carrying out normalization to obtain the similarity between the face in the first face image and the second face image.
6. The method of claim 1, wherein prior to said obtaining the first facial image, the method comprises training a facial image recognition model, the training the facial image recognition model comprising:
acquiring a training image, extracting global face features of the training image through a global sub-model, and extracting a plurality of local face features through a local sub-model, wherein the local face features respectively correspond to a plurality of local images of the first face image;
based on the local sub-model, according to the shielding proportion in the local images, determining the local face feature weight of the local face feature corresponding to each local image;
and adjusting the global face feature weight in the global sub-model until the loss value of the global sub-model is converged, and adjusting the local face feature weight of the local sub-model until the loss value of the local sub-model is converged.
7. The method according to any one of claims 1 to 6, wherein the determining the local face feature weight of the local face feature corresponding to each local image according to the occlusion proportion of each local image comprises:
and determining the local face feature weight of the local face feature corresponding to each local image according to the shielding proportion of each local image and at least one of the color and the shape of a shielding object.
8. The face image recognition device is characterized by comprising an acquisition module, a determination module and a judgment module:
the acquisition module is used for acquiring a first face image and extracting global face features and a plurality of local face features of the first face image, wherein the local face features respectively correspond to the local images of the first face image;
the determining module is used for determining a local face feature weight of a local face feature corresponding to each local image according to the shielding proportion of each local image;
the judging module is used for identifying the identity corresponding to the first face image according to the global face features, the plurality of local face features and the plurality of local face feature weights.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to run the computer program to perform the face image recognition method according to any one of claims 1 to 7.
10. A storage medium, in which a computer program is stored, wherein the computer program is arranged to execute the steps of the face image recognition method according to any one of claims 1 to 7 when running.
CN202110851821.6A 2021-07-27 2021-07-27 Face image recognition method, face image recognition equipment, electronic device and storage medium Pending CN113657195A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110851821.6A CN113657195A (en) 2021-07-27 2021-07-27 Face image recognition method, face image recognition equipment, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110851821.6A CN113657195A (en) 2021-07-27 2021-07-27 Face image recognition method, face image recognition equipment, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN113657195A true CN113657195A (en) 2021-11-16

Family

ID=78478776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110851821.6A Pending CN113657195A (en) 2021-07-27 2021-07-27 Face image recognition method, face image recognition equipment, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113657195A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956552A (en) * 2016-04-29 2016-09-21 中国人民解放军国防科学技术大学 Face black list monitoring method
CN106355138A (en) * 2016-08-18 2017-01-25 电子科技大学 Face recognition method based on deep learning and key features extraction
CN107292287A (en) * 2017-07-14 2017-10-24 深圳云天励飞技术有限公司 Face identification method, device, electronic equipment and storage medium
CN108664908A (en) * 2018-04-27 2018-10-16 深圳爱酷智能科技有限公司 Face identification method, equipment and computer readable storage medium
CN109829448A (en) * 2019-03-07 2019-05-31 苏州市科远软件技术开发有限公司 Face identification method, device and storage medium
CN110569731A (en) * 2019-08-07 2019-12-13 北京旷视科技有限公司 face recognition method and device and electronic equipment
CN111414879A (en) * 2020-03-26 2020-07-14 北京字节跳动网络技术有限公司 Face shielding degree identification method and device, electronic equipment and readable storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549921A (en) * 2021-12-30 2022-05-27 浙江大华技术股份有限公司 Object recognition method, electronic device, and computer-readable storage medium
CN114550269A (en) * 2022-03-02 2022-05-27 北京百度网讯科技有限公司 Mask wearing detection method, device and medium
CN116128514A (en) * 2022-11-28 2023-05-16 武汉利楚商务服务有限公司 Face brushing payment method and device under multi-face intervention
CN116128514B (en) * 2022-11-28 2023-10-13 武汉利楚商务服务有限公司 Face brushing payment method and device under multi-face intervention
CN115810214A (en) * 2023-02-06 2023-03-17 广州市森锐科技股份有限公司 Verification management method, system, equipment and storage medium based on AI face recognition

Similar Documents

Publication Publication Date Title
US11288504B2 (en) Iris liveness detection for mobile devices
CN113657195A (en) Face image recognition method, face image recognition equipment, electronic device and storage medium
CN110147721B (en) Three-dimensional face recognition method, model training method and device
WO2018188453A1 (en) Method for determining human face area, storage medium, and computer device
CN112115866A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN111738735B (en) Image data processing method and device and related equipment
JP2017062778A (en) Method and device for classifying object of image, and corresponding computer program product and computer-readable medium
CN107844742B (en) Facial image glasses minimizing technology, device and storage medium
CN111144284B (en) Method and device for generating depth face image, electronic equipment and medium
CN113705290A (en) Image processing method, image processing device, computer equipment and storage medium
CN109711309A (en) A kind of method whether automatic identification portrait picture closes one's eyes
CN112241689A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN111401192A (en) Model training method based on artificial intelligence and related device
CN113192132A (en) Eye catch method and device, storage medium and terminal
WO2024104144A1 (en) Image synthesis method and apparatus, storage medium, and electrical device
CN113468925B (en) Occlusion face recognition method, intelligent terminal and storage medium
Bourbakis et al. Skin-based face detection-extraction and recognition of facial expressions
Borah et al. ANN based human facial expression recognition in color images
CN115830720A (en) Living body detection method, living body detection device, computer equipment and storage medium
CN113435248A (en) Mask face recognition base enhancement method, device, equipment and readable storage medium
CN112200005A (en) Pedestrian gender identification method based on wearing characteristics and human body characteristics under community monitoring scene
CN117274504B (en) Intelligent business card manufacturing method, intelligent sales system and storage medium
CN113095116B (en) Identity recognition method and related product
CN113033307B (en) Object matching method and device, storage medium and electronic device
CN114038197B (en) Scene state determining method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination