WO2021169257A1 - Face recognition - Google Patents

Face recognition

Info

Publication number
WO2021169257A1
WO2021169257A1 PCT/CN2020/116486 CN2020116486W
Authority
WO
WIPO (PCT)
Prior art keywords
feature
face
features
posture
environmental lighting
Prior art date
Application number
PCT/CN2020/116486
Other languages
English (en)
French (fr)
Inventor
王峰
Original Assignee
北京三快在线科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京三快在线科技有限公司
Publication of WO2021169257A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Definitions

  • The embodiments of the present application relate to the field of visual recognition technology, and in particular to face recognition.
  • Face recognition is widely used in real-world scenarios such as attendance, payment, meeting sign-in, and entry to parks or scenic spots.
  • The face recognition method in the prior art is as follows: first, the user to be recognized registers one or more face images, which are stored in a face database; then, during face recognition, a face image is captured in real time and compared against the face images in the database to complete recognition.
  • In a first aspect, an embodiment of the present application discloses a face recognition method, including:
  • determining the posture feature, ambient lighting feature, and face feature of a face image to be recognized;
  • filtering out, from a preset face feature library, at least one face feature that matches the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set;
  • comparing the face feature of the face image to be recognized with each face feature in the face feature set, to obtain a face recognition result for the face image to be recognized.
  • In a second aspect, an embodiment of the present application discloses a face recognition device, including:
  • a feature determination module, used to determine the posture feature, ambient lighting feature, and face feature of the face image to be recognized;
  • a face feature set forming module, configured to filter out, from a preset face feature library, at least one face feature that matches the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set;
  • a face recognition module, configured to compare the face feature of the face image to be recognized with each face feature in the face feature set, to obtain the face recognition result of the face image to be recognized.
  • In a third aspect, an embodiment of the present application also discloses an electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor; when the processor executes the computer program, the face recognition method described in the embodiments of the present application is implemented.
  • In a fourth aspect, an embodiment of the present application discloses a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the face recognition method disclosed in the embodiments of the present application are implemented.
  • The face recognition method disclosed in the embodiments of this application determines the posture feature, ambient lighting feature, and face feature of the face image to be recognized; filters out, from the preset face feature library, at least one face feature matching the posture feature and ambient lighting feature of the image to form a face feature set; and compares the face feature of the image with each face feature in the set to obtain the face recognition result, which helps improve the accuracy of face recognition.
  • By selecting registered face images whose posture and ambient lighting conditions are the same as or similar to those of the face image to be recognized for comparison, the method reduces the impact that differences in posture and ambient lighting have on face comparison and can cope with complex lighting and posture changes, thereby improving recognition accuracy in real scenarios.
  • FIG. 1 is a first flowchart of the face recognition method according to Embodiment 1 of the present application;
  • FIG. 2 is a second flowchart of the face recognition method according to Embodiment 1 of the present application;
  • FIG. 3 is a first schematic structural diagram of the face recognition device according to Embodiment 2 of the present application;
  • FIG. 4 is a second schematic structural diagram of the face recognition device according to Embodiment 3 of the present application;
  • FIG. 5 schematically shows a block diagram of an electronic device for performing the method according to the present application; and
  • FIG. 6 schematically shows a storage unit for holding or carrying program code implementing the method according to the present application.
  • An embodiment of the present application discloses a face recognition method. As shown in FIG. 1, the method includes steps 110 to 130.
  • Step 110: Determine the posture feature, ambient lighting feature, and face feature of the face image to be recognized.
  • In the face recognition process, it is first necessary to obtain at least one face image of the face to be recognized and input it into a preset face recognition engine for recognition.
  • The embodiments of the present application do not limit the specific way the face image to be recognized is obtained, nor the lighting environment in which it is obtained.
  • The posture feature refers to the face posture represented by any one or more of the pitch angle, left-right rotation (yaw) angle, and left-right sway (roll) angle of the face.
  • Determining the posture feature, ambient lighting feature, and face feature of the face image to be recognized includes: determining, through a preset face posture recognition model, the posture feature and face key point information matching the face image to be recognized, and determining, through a preset face feature extraction model, the face features in the face image to be recognized; and computing regional HSV color histograms from the face key point information to determine the ambient lighting feature of the face image to be recognized.
  • HSV (Hue, Saturation, Value) is a color space.
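  • Read together, steps 110 to 130 (plus the optional weight update of step 140, described later) form a single pipeline. The sketch below is illustrative only, not the patent's own implementation: posture_model and extract_face_feature stand in for the preset posture recognition and feature extraction models, and the other helpers (lighting_feature, screen_by_max_probability, update_weights, recognise) are hypothetical functions developed in the sketches that accompany the corresponding steps below, so the fragments assemble into one module.

```python
def recognise_face(image_bgr, library, pose_model, posture_model, extract_face_feature):
    """Illustrative pipeline for steps 110-140; all helpers are stand-ins."""
    # Step 110: posture feature, face key points, face and lighting features.
    posture, landmarks = posture_model(image_bgr)
    query_feature = extract_face_feature(image_bgr)      # e.g. a 256-d vector
    env_feature = lighting_feature(image_bgr, landmarks)

    # Step 120: narrow the library to features matching posture and lighting.
    feature_set = screen_by_max_probability(library, pose_model, posture, env_feature)

    # Step 140: adapt the lighting weights (may also run after step 130).
    update_weights(pose_model, posture, env_feature)

    # Step 130: compare against every face feature in the screened set.
    return recognise(query_feature, feature_set)
```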
  • After a face image to be recognized is obtained, the face features to be recognized first need to be extracted from it.
  • A face feature extraction model from the prior art can be used to determine the face features in the face image to be recognized; the preset model may be a neural network model or an algorithmic model.
  • Typically, the face features are represented by vectors of a preset dimension (for example, 256 dimensions).
  • The posture feature of the face image to be recognized is the posture feature of the face in that image;
  • the ambient lighting feature of the face image to be recognized is the lighting feature of the environment in which that image was captured.
  • A preset face posture recognition model may be used to determine the posture feature of the face in the face image to be recognized, and to determine the key point information of the face in that image.
  • The face posture recognition model can be built on a convolutional neural network; by operating on an input face image, it outputs the face key point information and posture feature of that image. For example, it may output a posture feature represented as a vector of any one or more of the pitch, yaw, and roll angles of the face, together with key point information characterizing the face key points.
  • After the key points are determined, the ambient lighting feature of the face image to be recognized is determined from the HSV color histograms of the patch color features of the face key points. For example, HSV (Hue, Saturation, Value) color features are extracted from an image region of preset size around each face key point to obtain the HSV color histogram of that key point; the histograms of all key points are then concatenated in the key points' fixed order, and the resulting vector is used as the ambient lighting feature of the face image to be recognized.
  • Taking 106 detected face key points as an example, if a 16*16 image region around each key point is used to extract HSV color features and compute the HSV color histogram, an ambient lighting feature vector of length 106*3*64 is obtained.
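  • As a concrete illustration of the histogram construction just described, the following minimal sketch concatenates per-key-point HSV histograms into one lighting vector. It assumes OpenCV-style BGR input and an externally supplied landmark list; the function name and defaults are ours, not the patent's.

```python
import cv2
import numpy as np

def lighting_feature(image_bgr, landmarks, patch=16, bins=64):
    """Concatenated per-key-point HSV histograms, as described above.

    image_bgr: H x W x 3 uint8 face image (OpenCV BGR layout assumed).
    landmarks: iterable of (x, y) face key points in a fixed order.
    Returns a vector of length len(landmarks) * 3 * bins,
    e.g. 106 * 3 * 64 for a 106-point landmark scheme.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    ranges = [(0, 180), (0, 256), (0, 256)]   # OpenCV hue spans 0..179
    half = patch // 2
    parts = []
    for x, y in landmarks:
        x, y = int(round(x)), int(round(y))
        region = hsv[max(y - half, 0):y + half, max(x - half, 0):x + half]
        for ch in range(3):                   # H, S, V channels
            hist, _ = np.histogram(region[..., ch], bins=bins, range=ranges[ch])
            parts.append(hist.astype(np.float32))
    return np.concatenate(parts)
```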
  • Step 120: Filter out, from a preset face feature library, at least one face feature that matches the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set.
  • Before face recognition is performed, that is, before the step of filtering out from the preset face feature library at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized to form a face feature set, the method further includes: constructing the face feature library, which contains face features corresponding to preset posture features and preset ambient lighting features.
  • In some embodiments, constructing the face feature library includes: acquiring a frontal image of a registered face; performing three-dimensional reconstruction on the frontal image based on preset postures and preset ambient lighting conditions, to obtain face images of the registered face under each preset posture and different ambient lighting conditions; acquiring one group of face features for each such face image; and constructing the face feature library from the face features acquired for each registered face under each preset posture and each ambient lighting condition.
  • That is, each three-dimensionally reconstructed face image corresponds to one group of face features in the face feature library;
  • each group of face features corresponds to the posture feature of the posture of its reconstructed face image, and to the ambient lighting feature of the lighting condition of that image.
  • The preset postures may be postures defined by the pitch, yaw, and roll angles of the face. For example:
  • a pitch angle of less than 10 degrees,
  • a yaw angle of less than 5 degrees,
  • and a roll angle of less than 5 degrees may be defined as posture 1;
  • a pitch angle of less than 20 degrees,
  • a yaw angle of less than 5 degrees,
  • and a roll angle of less than 5 degrees may be defined as posture 2; and so on.
  • A variety of postures can be defined in this way; the preset postures are determined according to the accuracy and robustness requirements of face recognition.
  • An ambient lighting condition is determined by any one or more of: the light source type (e.g., natural light, lamp light, spotlight), the light attribute (e.g., point light source, parallel light source), the point light source position (e.g., expressed as (x, y, z)), the parallel light direction (e.g., expressed as (θ1, θ2, θ3)), and the illumination color (e.g., expressed as (R, G, B)). For example:
  • a light source type of natural light,
  • a light attribute of parallel light,
  • a point light source position of (1,1,0),
  • a parallel light direction of (0,0,1),
  • and an illumination color of (128,128,128) may together be defined as ambient lighting condition 1;
  • the same values but with a parallel light direction of (0,0,5) may be defined as ambient lighting condition 2; and so on.
  • Following this method, a variety of ambient lighting conditions can be determined.
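  • The posture and lighting-condition definitions above are essentially enumerable configuration records. A minimal sketch of how they might be represented follows; the field names are our assumption, since the patent only lists the attributes.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Posture:            # pose bin, defined by angle upper bounds in degrees
    pitch_max: float      # up/down tilt
    yaw_max: float        # left-right rotation
    roll_max: float       # left-right sway

@dataclass(frozen=True)
class LightingCondition:
    source: str                            # "natural", "lamp", "spotlight"
    kind: str                              # "point" or "parallel"
    position: Tuple[float, float, float]   # point-source position (x, y, z)
    direction: Tuple[float, float, float]  # parallel-light direction
    color: Tuple[int, int, int]            # illumination colour (R, G, B)

POSTURE_1 = Posture(pitch_max=10, yaw_max=5, roll_max=5)
POSTURE_2 = Posture(pitch_max=20, yaw_max=5, roll_max=5)
CONDITION_1 = LightingCondition("natural", "parallel", (1, 1, 0), (0, 0, 1), (128, 128, 128))
CONDITION_2 = LightingCondition("natural", "parallel", (1, 1, 0), (0, 0, 5), (128, 128, 128))
```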
  • Taking M preset postures and N preset ambient lighting conditions as an example, for a registration image P1 of registered user U1, three-dimensional reconstruction yields M*N face images.
  • Each face image corresponds to one preset posture and one preset ambient lighting condition: there are N face images for each preset posture, and the N face images of each preset posture correspond one-to-one to the N preset ambient lighting conditions.
  • Further, from each of the M*N reconstructed face images of registered user U1, a group of U1's face features can be extracted, giving U1 M*N groups of face features corresponding to the different preset postures and different preset ambient lighting conditions.
  • In this way, M*N groups of face features corresponding to different preset postures and different preset ambient lighting conditions can be obtained for every registered user; M and N are positive integers.
  • Also before the step of filtering out, from the preset face feature library, at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized to form a face feature set, the method further includes: constructing a posture-environment-lighting model, which contains each preset ambient lighting feature corresponding to each preset posture feature.
  • Each posture feature corresponds to multiple groups of ambient lighting features.
  • In some embodiments, constructing the posture-environment-lighting model includes: for each preset posture feature, determining the several three-dimensionally reconstructed face images corresponding to that posture feature; for each of those face images corresponding to the same preset ambient lighting condition, determining the ambient lighting feature of the image from the HSV color histograms of the patch color features of its face key points; and determining the average of the ambient lighting features of all images corresponding to the same preset ambient lighting condition as the ambient lighting feature for that preset posture feature and that preset ambient lighting condition. For example, one group of ambient lighting features can be extracted from each face image of each registered user, and the per-condition averages over all registered users populate the model.
  • For the scheme of extracting the HSV color histograms of face key point patch color features from each reconstructed face image, refer to the technical scheme for determining the ambient lighting feature of the face image to be recognized in the preceding steps; it is not repeated here.
  • Following the above method, the ambient lighting features corresponding to posture1 and the N ambient lighting conditions can be determined, as can the ambient lighting features of the N lighting conditions under every posture.
  • The correspondence between each of the M postures and the ambient lighting features of the N lighting conditions constitutes the posture-environment-lighting model.
  • A weight is set for each ambient lighting feature; the weight is used to compute the similarity between each ambient lighting feature in the posture-environment-lighting model and the ambient lighting feature matching the face image to be recognized.
  • In some embodiments, the face feature library contains M face feature sub-libraries, each corresponding to one preset posture feature, that is, one posture; each sub-library further contains N face feature sets, each corresponding to one group of face features and one ambient lighting feature.
  • In other embodiments, each group of face features can be stored indexed by its posture feature and ambient lighting feature, for example in the form (posture, condition, character), where posture denotes the posture feature corresponding to the group, condition denotes its ambient lighting feature, and character denotes the group of face features itself.
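  • Combining the 3D-reconstruction step with the (posture, condition, character) indexing just described, library construction might look like the following sketch, where reconstruct_3d and extract_face_feature are hypothetical stand-ins for the reconstruction procedure and the preset feature extraction model:

```python
def build_feature_library(registered_users, postures, conditions):
    """Sketch of the (posture, condition, character)-indexed library.

    registered_users: {user_id: frontal registration image}.
    Returns {(posture, condition): {user_id: character}}, i.e. M*N feature
    groups per registered user.
    """
    library = {}
    for user_id, frontal in registered_users.items():
        for posture in postures:              # M preset postures
            for condition in conditions:      # N preset lighting conditions
                image = reconstruct_3d(frontal, posture, condition)
                character = extract_face_feature(image)   # e.g. 256-d vector
                library.setdefault((posture, condition), {})[user_id] = character
    return library
```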
  • In some embodiments, filtering out from the preset face feature library at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set, includes: taking each group of ambient lighting features corresponding to the posture feature in the preset posture-environment-lighting model as a designated ambient lighting feature; computing, from the current weight of each group of designated ambient lighting features, the similarity probability between the ambient lighting feature and each group of designated ambient lighting features; determining the group of designated ambient lighting features with the largest similarity probability as the current ambient lighting feature; and selecting, from the preset face feature library, the face features matching the posture feature and the current ambient lighting feature to form the face feature set.
  • For example, assuming the posture feature matching the face image to be recognized is posture1, the N ambient lighting features corresponding to posture1 in the model can be taken as N designated ambient lighting features; then
  • the similarity probabilities between these N designated ambient lighting features and the ambient lighting feature matching the face image to be recognized are computed separately.
  • The similarity probability between a designated ambient lighting feature and the ambient lighting feature matching the face image to be recognized reflects how similar the capture lighting conditions of that image are to the designated lighting condition:
  • the greater the similarity probability between the image's ambient lighting feature and a given designated ambient lighting feature, the more similar the image's capture lighting conditions are to the lighting condition of that designated feature.
  • The group of designated ambient lighting features with the largest similarity probability may then be determined as the current ambient lighting feature, and the registered face features of all users in the library that match the current ambient lighting feature and the posture feature form the face feature set used for comparison.
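  • A sketch of this first screening variant follows, assuming the model stores a (reference lighting feature, weight) pair per lighting condition and using the similarity_probability function sketched further below:

```python
def screen_by_max_probability(library, pose_model, posture, env_feature):
    """First screening variant: keep only the single best lighting condition.

    pose_model[posture] is assumed to map each lighting condition to a
    (reference_lighting_feature, weight) pair; similarity_probability is
    defined in a later sketch.
    """
    probs = {
        cond: similarity_probability(env_feature, ref, weight)
        for cond, (ref, weight) in pose_model[posture].items()
    }
    current = max(probs, key=probs.get)       # the "current" lighting feature
    return library[(posture, current)]        # {user_id: character}
```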
  • In other embodiments, the filtering step includes: taking each group of ambient lighting features corresponding to the posture feature in the preset posture-environment-lighting model as designated ambient lighting features; computing, from the current weight of each group, the similarity probability between the ambient lighting feature and each group of designated ambient lighting features; selecting, from the face features in the library that match the posture feature, those corresponding to the designated ambient lighting features whose similarity probability meets a preset condition, and establishing one face feature subset per selected group; for each user, performing weighted fusion of that user's face features across the face feature subsets to obtain the user's fused face feature, where the weight of each face feature in the fusion is positively correlated with the similarity probability of the designated ambient lighting feature of the subset it belongs to; and determining the fused face features of all users as the face feature set.
  • The preset condition may be the preset number of groups with the largest similarity probability, the number being determined according to the number of lighting environment features; for example, it may be 3 groups.
  • Alternatively, the preset condition may be that the similarity probability is greater than a preset value, for example 0.
  • For example, assuming again that the posture feature matching the face image to be recognized is posture1, the N ambient lighting features corresponding to posture1 can be taken as N designated ambient lighting features; then
  • the similarity probabilities between these N designated ambient lighting features and the ambient lighting feature matching the face image to be recognized are computed separately.
  • From the face features in the library that match the posture feature, the face features corresponding to the designated ambient lighting features whose similarity probability is greater than 0 (for example condition1, condition3, condition5) are selected,
  • and one face feature subset is established per selected group of designated ambient lighting features, giving three face feature subsets S1, S2, and S3;
  • for each registered user, the user's face features in the subsets (e.g., character1, character2, and character3) are weight-fused, for instance as character_fused = Σ_{i=1..K} β(i)·character(i) / Σ_{i=1..K} β(i) (the exact formula is published only as an image; the normalized weighted sum here is a reconstruction from the surrounding definitions), to obtain the user's fused face feature.
  • Here, character(i) denotes a group of face features of a given user in face feature subset i,
  • β(i) denotes the similarity probability between the designated ambient lighting feature corresponding to subset i and the ambient lighting feature of the face image to be recognized,
  • and K denotes the number of face feature groups with the largest similarity probabilities that were selected.
  • The fused face features of all registered users then form the face feature set.
  • Selecting face features corresponding to several designated ambient lighting features and fusing them before recognition, rather than using only the face features of the single closest designated lighting feature, improves the robustness of face recognition to ambient lighting.
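  • A sketch of the weighted fusion for one user follows. Normalizing by the sum of the similarity probabilities is our assumption, since the exact formula appears only as an image in the published document:

```python
import numpy as np

def fuse_user_features(user_features, probs):
    """Second screening variant: fuse one user's features across K subsets.

    user_features: list of K vectors character(1..K) for the same user, one
    per selected face feature subset.
    probs: the matching similarity probabilities beta(1..K).
    """
    beta = np.asarray(probs, dtype=np.float64)
    chars = np.stack([np.asarray(c, dtype=np.float64) for c in user_features])
    return (beta[:, None] * chars).sum(axis=0) / beta.sum()
```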
  • In some embodiments, the similarity probability between the ambient lighting feature matching the face image to be recognized and any designated ambient lighting feature is calculated by the following formula:
  • similarity probability = weight * G(x_i, e), where e is the ambient lighting feature of the face image to be recognized, x_i is the current designated ambient lighting feature, and G is a Gaussian of the distance between them (G's exact definition is published only as an image).
  • Other methods may also be used to calculate the similarity probability.
  • The similarity probability between the image's ambient lighting feature and a designated ambient lighting feature is positively correlated with the weight of that designated feature, and decreases as the distance between the two lighting features grows.
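  • Under the Gaussian reading of G above, the similarity probability can be sketched as follows; the bandwidth sigma is an assumed parameter, not given in the document:

```python
import numpy as np

def similarity_probability(env_feature, designated_feature, weight, sigma=1.0):
    """similarity probability = weight * G(x_i, e), with G taken to be a
    Gaussian of the distance between the two lighting features."""
    e = np.asarray(env_feature, dtype=np.float64)
    x_i = np.asarray(designated_feature, dtype=np.float64)
    dist2 = float(np.sum((e - x_i) ** 2))
    return weight * float(np.exp(-dist2 / (2.0 * sigma ** 2)))
```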
  • When the posture-environment-lighting model is established, the weights of all ambient lighting features are set to equal initial values; thereafter, the weight of each ambient lighting feature in the model changes dynamically.
  • As shown in FIG. 2, after at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized has been filtered out from the preset face feature library and the face feature set has been formed, the method further includes step 140.
  • Step 140: According to the ambient lighting feature, update the weights used to compute the similarity between each designated ambient lighting feature in the posture-environment-lighting model and the ambient lighting feature.
  • The weight-update step may be executed after the step of filtering out from the preset face feature library at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized to form the face feature set, or after the step of comparing the face feature of the face image to be recognized with each face feature in the face feature set to obtain the face recognition result.
  • In some embodiments, updating the weights according to the ambient lighting feature includes: computing, from the current weight of each group of designated ambient lighting features corresponding to the posture feature in the model, the similarity probability between the ambient lighting feature matching the face image to be recognized and each group of designated ambient lighting features; and updating the current weights of the designated ambient lighting features according to those similarity probabilities, so that the updated weight of each group is positively correlated with that group's similarity probability with the ambient lighting feature.
  • In some embodiments, the weights are updated by adding, to the current weight of each designated ambient lighting feature, the Gaussian similarity between that feature and the ambient lighting feature matching the face image to be recognized, and then normalizing the results so that the weights of the groups belonging to the same posture sum to 1.
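  • The add-then-normalize update can be sketched directly from that description; the Gaussian form of G and its bandwidth are assumptions carried over from the similarity sketch:

```python
import numpy as np

def update_weights(pose_model, posture, env_feature, sigma=1.0):
    """Step 140: w'_xi = current w_xi + G(x_i, e), then normalise so the
    weights of the posture's lighting conditions sum to 1.

    pose_model[posture] maps each lighting condition to a
    (reference_lighting_feature, weight) pair and is updated in place.
    """
    entry = pose_model[posture]
    e = np.asarray(env_feature, dtype=np.float64)
    candidates = {}
    for cond, (ref, weight) in entry.items():
        dist2 = float(np.sum((e - np.asarray(ref, dtype=np.float64)) ** 2))
        candidates[cond] = weight + float(np.exp(-dist2 / (2.0 * sigma ** 2)))
    total = sum(candidates.values())
    for cond, (ref, _) in list(entry.items()):
        entry[cond] = (ref, candidates[cond] / total)
```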
  • From the method of computing the similarity probability described above, adjusting the weight of an ambient lighting feature affects the similarity probability between that feature and the ambient lighting feature matching the face image to be recognized, thereby further affecting the face
  • feature set selected on the basis of that probability. For example, when face recognition is performed on a face image p1 captured in the morning, the weights of the various ambient lighting features in the posture-environment-lighting model are updated according to the morning's ambient lighting features; after the update, the face features corresponding to designated ambient lighting features that are the same as or similar to the morning lighting conditions have a greater probability of being selected for matching against p1, which improves the recognition accuracy for face images captured in the morning.
  • Similarly, in the evening, the weights of the various ambient lighting features in the model are updated according to the evening's ambient lighting features when a face image p2 captured in the evening is recognized;
  • after the update, the face features corresponding to designated ambient lighting features that are the same as or similar to the evening lighting conditions have a greater probability of being selected for matching against p2, which improves the recognition accuracy for face images captured in the evening.
  • Step 130: Compare the face feature of the face image to be recognized with each face feature in the face feature set, to obtain the face recognition result of the face image to be recognized.
  • After the face feature set matching the current posture feature and ambient lighting feature has been determined, the similarity between the face feature matching the face image to be recognized and each group of face features in the set can be computed separately;
  • in this way, the face feature of the face image to be recognized is compared against each face feature in the face feature set.
  • The face feature in the library that matches the face image to be recognized is then determined from the computed similarities; at this point, the identity information of the face image to be recognized can be further determined.
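  • A sketch of the final comparison follows; cosine similarity and the acceptance threshold are our assumptions, since the document only states that per-pair similarities are computed and the best library match determines the identity:

```python
import numpy as np

def recognise(query_feature, feature_set, threshold=0.5):
    """Step 130: compare the query feature with every feature in the set.

    feature_set: {user_id: feature_vector}, as produced by the screening
    step. Returns the best matching user id, or None if no score clears
    the (assumed) threshold.
    """
    q = np.asarray(query_feature, dtype=np.float64)
    q = q / np.linalg.norm(q)
    best_id, best_score = None, -1.0
    for user_id, feat in feature_set.items():
        f = np.asarray(feat, dtype=np.float64)
        score = float(q @ (f / np.linalg.norm(f)))
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= threshold else None
```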
  • To summarize, the face recognition method disclosed in the embodiments of this application determines the posture feature, ambient lighting feature, and face feature of the face image to be recognized; filters out, from the preset face feature library, at least one face feature matching the posture feature and ambient lighting feature of the image to form a face feature set; and compares the face feature of the image with each face feature in the set to obtain
  • the face recognition result, which helps improve the accuracy of face recognition.
  • By selecting registered face images whose posture and ambient lighting conditions are the same as or similar to those of the face image to be recognized for comparison, the method reduces the impact that differences in posture and ambient lighting have on face comparison, and can cope with complex lighting and posture changes, thereby improving recognition accuracy in real scenarios.
  • In addition, this application establishes the posture-environment-lighting model in advance and dynamically adjusts the weights used to compute the similarity probabilities between the various ambient lighting features contained in the model and the current ambient lighting feature, so that the face recognition process dynamically learns real-time ambient lighting conditions, further improving recognition accuracy.
  • An embodiment of the present application discloses a face recognition device. As shown in FIG. 3, the device includes:
  • a feature determination module 310, used to determine the posture feature, ambient lighting feature, and face feature of the face image to be recognized;
  • a face feature set forming module 320, configured to filter out, from a preset face feature library, at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set;
  • a face recognition module 330, configured to compare the face feature of the face image to be recognized with each face feature in the face feature set, to obtain the face recognition result of the face image to be recognized.
  • In some embodiments, as shown in FIG. 4, the device further includes:
  • a weight update module 340, configured to update, according to the ambient lighting feature, the weights used to compute the similarity between each designated ambient lighting feature in the posture-environment-lighting model and the ambient lighting feature.
  • Updating the weights according to the ambient lighting feature includes: computing, from the current weight of each group of designated ambient lighting features corresponding to the posture feature in the model, the similarity probability between the ambient lighting feature matching the face image to be recognized and each group of designated ambient lighting features; and
  • updating the current weights of the designated ambient lighting features so that the updated weight of each group is positively correlated with that group's similarity probability with the ambient lighting feature.
  • The feature determination module 310 is further configured to: determine, through a preset face posture recognition model, the posture feature and face key point information matching the face image to be recognized; determine, through a preset face feature extraction model, the face features in the image; and compute regional HSV color histograms from the face key point information to determine the ambient lighting feature of the image.
  • In some embodiments, the face feature set forming module 320 is further configured to:
  • take each group of ambient lighting features corresponding to the posture feature in the preset posture-environment-lighting model as designated ambient lighting features; compute, from the current weight of each group, the similarity probability between the ambient lighting feature and each group of designated ambient lighting features; determine the group with the largest similarity probability as the current ambient lighting feature; and
  • select, from the preset face feature library, the face features matching the posture feature and the current ambient lighting feature to form the face feature set.
  • In other embodiments, the face feature set forming module 320 is further configured to:
  • take each group of ambient lighting features corresponding to the posture feature in the preset posture-environment-lighting model as designated ambient lighting features; compute, from the current weight of each group, the similarity probability between the ambient lighting feature and each group; select, from the face features in the library that match the posture feature, those corresponding to the designated ambient lighting features whose similarity probability meets a preset condition, and establish one face feature subset per group; and,
  • for each user, perform weighted fusion of the user's face features across the face feature subsets to obtain the user's fused face feature, where the weight of each face feature
  • in the fusion is positively correlated with the similarity probability of the designated ambient lighting feature of the subset it belongs to;
  • the fused face features of all users are determined to form the face feature set.
  • In other embodiments, as shown in FIG. 4, the device further includes:
  • a face feature library construction module 350, configured to build the face feature library, which contains face features corresponding to preset posture features and preset ambient lighting features.
  • The face feature library construction module 350 is further configured to: acquire a frontal image of a registered face; perform three-dimensional reconstruction on the frontal image based on preset postures and preset ambient lighting conditions to obtain face images of the registered face under each preset posture and different ambient lighting conditions; acquire one group of face features for each such face image; and construct the face feature library from the acquired face features.
  • The face recognition device disclosed in the embodiments of the present application is used to implement the face recognition method described in Embodiment 1 of the present application.
  • The specific implementation of each module of the device is not described in detail here; refer to the specific implementation of the corresponding steps in the method embodiment.
  • The face recognition device disclosed in the embodiments of the present application determines the posture feature, ambient lighting feature, and face feature of the face image to be recognized; filters out, from the preset face feature library, at least one face feature matching the posture feature and ambient lighting feature of the image to form a face feature set; and compares the face feature of the image with each face feature in the set to obtain the face recognition result, which helps improve the accuracy of face recognition.
  • By selecting registered face images whose posture and ambient lighting conditions are the same as or similar to those of the face image to be recognized for comparison, the device reduces the impact that differences in posture and ambient lighting have on face comparison and can cope with complex lighting and posture changes, thereby improving recognition accuracy in real scenarios.
  • In addition, the present application establishes the posture-environment-lighting model in advance and dynamically adjusts the weights used to compute the similarity probabilities between the various ambient lighting features in the model and the current ambient lighting feature, so that the face recognition process dynamically learns real-time ambient lighting conditions, further improving recognition accuracy.
  • The device embodiments described above are merely illustrative.
  • Units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units.
  • Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments; those of ordinary skill in the art can understand and implement them without creative work.
  • The various component embodiments of the present application may be implemented in hardware, in software modules running on one or more processors, or in a combination of the two.
  • Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the electronic device according to the embodiments of the present application.
  • The present application can also be implemented as a device or apparatus program (for example, a computer program or a computer program product) for executing part or all of the methods described herein.
  • Such a program implementing the present application may be stored on a computer-readable medium, or may take the form of one or more signals.
  • Such a signal may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
  • FIG. 5 shows an electronic device that can implement the method according to the present application.
  • The electronic device may be a PC, a mobile terminal, a personal digital assistant, a tablet computer, etc.
  • The electronic device conventionally includes a processor 520, a memory 510, and program code 530 that is stored on the memory 510 and can run on the processor 520.
  • When the processor 520 executes the program code 530, the method described in the above embodiments is implemented.
  • The memory 510 may be a computer program product or a computer-readable medium.
  • The memory 510 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM, a hard disk, or ROM.
  • The memory 510 has a storage space 5101 for the program code 530 of a computer program for executing any of the method steps in the above method.
  • For example, the storage space 5101 for the program code 530 may include individual computer programs each used to implement one of the various steps of the above method.
  • The program code 530 is computer-readable code. These computer programs can be read from or written into one or more computer program products; such computer program products include program code carriers such as hard disks, compact discs (CDs), memory cards, or floppy disks.
  • The computer program includes computer-readable code which, when run on an electronic device, causes the electronic device to execute the method according to the above-described embodiments.
  • An embodiment of the present application also discloses a computer-readable storage medium on which a computer program is stored;
  • when the program is executed by a processor, the steps of the face recognition method described in Embodiment 1 of the present application are realized.
  • Such a computer program product may be a computer-readable storage medium, and the computer-readable storage medium may have storage segments, storage spaces, etc., arranged similarly to the memory 510 in the electronic device shown in FIG. 5.
  • The program code may, for example, be compressed and stored in the computer-readable storage medium in an appropriate form.
  • The computer-readable storage medium is usually a portable or fixed storage unit as described with reference to FIG. 6.
  • The storage unit includes computer-readable code 530', i.e., code readable by a processor which, when executed by the processor, implements the steps of the method described above.
  • Any reference signs placed between parentheses shall not be construed as limiting the claims.
  • The word "comprising" does not exclude the presence of elements or steps not listed in a claim.
  • The word "a" or "an" preceding an element does not exclude the presence of multiple such elements.
  • The application can be realized by means of hardware comprising several distinct elements and by means of a suitably programmed computer; in a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware.
  • The use of the words first, second, third, etc. does not indicate any order; these words may be interpreted as names.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of this application disclose a face recognition method, which belongs to the field of visual recognition technology. The face recognition method disclosed in the embodiments of this application includes: determining the posture feature, ambient lighting feature, and face feature of a face image to be recognized; filtering out, from a preset face feature library, at least one face feature that matches the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set; and comparing the face feature of the face image to be recognized with each face feature in the face feature set, to obtain a face recognition result for the face image to be recognized.

Description

Face recognition
This application claims priority to the Chinese patent application filed with the China Patent Office on February 24, 2020, with application number 202010113903.6 and the invention title "Face recognition method and device, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of this application relate to the field of visual recognition technology, and in particular to face recognition.
Background
Face recognition is widely used in real-world scenarios such as attendance, payment, meeting sign-in, and face-scan entry to parks or scenic spots. The face recognition method in the prior art is as follows: first, the user to be recognized registers one or more face images, which are stored in a face database; then, during face recognition, a face image is captured in real time and compared against the face images in the database to complete recognition.
Summary
In a first aspect, an embodiment of this application discloses a face recognition method, including:
determining the posture feature, ambient lighting feature, and face feature of a face image to be recognized;
filtering out, from a preset face feature library, at least one face feature that matches the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set;
comparing the face feature of the face image to be recognized with each face feature in the face feature set, to obtain a face recognition result for the face image to be recognized.
In a second aspect, an embodiment of this application discloses a face recognition device, including:
a feature determination module, used to determine the posture feature, ambient lighting feature, and face feature of a face image to be recognized;
a face feature set forming module, used to filter out, from a preset face feature library, at least one face feature that matches the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set;
a face recognition module, used to compare the face feature of the face image to be recognized with each face feature in the face feature set, to obtain the face recognition result of the face image to be recognized.
In a third aspect, an embodiment of this application further discloses an electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor; when the processor executes the computer program, the face recognition method described in the embodiments of this application is implemented.
In a fourth aspect, an embodiment of this application discloses a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the face recognition method disclosed in the embodiments of this application are implemented.
The face recognition method disclosed in the embodiments of this application determines the posture feature, ambient lighting feature, and face feature of the face image to be recognized; filters out, from the preset face feature library, at least one face feature matching the posture feature and ambient lighting feature of the image to form a face feature set; and compares the face feature of the image with each face feature in the set to obtain the face recognition result, which helps improve the accuracy of face recognition. By selecting registered face images whose posture and ambient lighting conditions are the same as or similar to those of the face image to be recognized for comparison, the method disclosed in the embodiments of this application reduces the impact that differences in posture and ambient lighting have on face comparison and can cope with complex lighting and posture changes, thereby improving recognition accuracy in real scenarios. The above description is only an overview of the technical solution of this application. In order to understand the technical means of this application more clearly so that it may be implemented in accordance with the contents of the specification, and in order to make the above and other objects, features, and advantages of this application more apparent and understandable, specific embodiments of this application are set forth below.
Description of the Drawings
To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative work fall within the protection scope of this application.
FIG. 1 is a first flowchart of the face recognition method of Embodiment 1 of this application;
FIG. 2 is a second flowchart of the face recognition method of Embodiment 1 of this application;
FIG. 3 is a first schematic structural diagram of the face recognition device of Embodiment 2 of this application;
FIG. 4 is a second schematic structural diagram of the face recognition device of Embodiment 3 of this application;
FIG. 5 schematically shows a block diagram of an electronic device for performing the method according to this application; and
FIG. 6 schematically shows a storage unit for holding or carrying program code implementing the method according to this application.
Detailed Description of the Embodiments
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative work fall within the protection scope of this application.
Embodiment 1
An embodiment of this application discloses a face recognition method. As shown in FIG. 1, the method includes steps 110 to 130.
Step 110: Determine the posture feature, ambient lighting feature, and face feature of the face image to be recognized.
In the face recognition process, it is first necessary to obtain at least one face image of the face to be recognized and input the face image to be recognized into a preset face recognition engine for recognition. The embodiments of this application do not limit the specific way the face image to be recognized is obtained, nor the lighting environment in which it is obtained.
In some embodiments of this application, the posture feature refers to the face posture represented by any one or more of the pitch angle, left-right rotation (yaw) angle, and left-right sway (roll) angle of the face. Determining the posture feature, ambient lighting feature, and face feature of the face image to be recognized includes: determining, through a preset face posture recognition model, the posture feature and face key point information matching the face image to be recognized, and determining, through a preset face feature extraction model, the face features in the face image to be recognized; and computing regional HSV (Hue Saturation Value, a color space) color histograms from the face key point information to determine the ambient lighting feature of the face image to be recognized. After a face image to be recognized is obtained, the face features to be recognized first need to be extracted from it. In specific implementations, a face feature extraction model from the prior art can be used to determine the face features in the face image to be recognized, where the preset face feature extraction model may be a neural network model or an algorithmic model; alternatively, other prior art methods may be used to determine the face features in the image. Typically, the face features are represented by vectors of a preset dimension (for example, 256 dimensions). In specific implementations, the posture feature and ambient lighting feature matching the face image to be recognized also need to be determined: the posture feature of the face image to be recognized is the posture feature of the face in that image, and the ambient lighting feature of the image is the lighting feature of the environment in which the image was captured.
In some embodiments of this application, a preset face posture recognition model may be used to determine the posture feature of the face in the face image to be recognized, as well as the face key point information in that image. The face posture recognition model can be built on a convolutional neural network; by operating on an input face image, it can output the face key point information and posture feature of that image. For example, it may output a posture feature represented as a vector of any one or more of the pitch, yaw, and roll angles of the face, together with key point information characterizing the face key points. In other embodiments of this application, other prior art methods may also be used to determine the posture feature and face key point information of the face in the image; this application does not limit the specific technical solution for determining the face key points and the posture feature of a face image.
After the face key points in the face image to be recognized have been determined, the ambient lighting feature of the image is determined from the HSV color histograms of the patch color features of the face key points. For example, HSV (Hue, Saturation, Value) color features are extracted from an image region of preset size around each face key point to obtain the HSV color histogram of each key point; the histograms of all key points are then concatenated in the key points' fixed order, and the resulting vector is used as the ambient lighting feature of the face image to be recognized. Taking 106 detected face key points as an example, if a 16*16 image region around each key point is used to extract HSV color features and compute the HSV color histogram, an ambient lighting feature vector of length 106*3*64 is obtained.
Step 120: Filter out, from a preset face feature library, at least one face feature that matches the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set.
In specific implementations, before face recognition is performed, that is, before the step of filtering out from the preset face feature library at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized to form a face feature set, the method further includes: constructing the face feature library, which contains face features corresponding to preset posture features and preset ambient lighting features.
In some embodiments of this application, constructing the face feature library includes: acquiring a frontal image of a registered face; performing three-dimensional reconstruction on the frontal image based on preset postures and preset ambient lighting conditions, to obtain face images of the registered face under each preset posture and different ambient lighting conditions corresponding to the frontal image; acquiring one group of face features for each face image of the registered face under each preset posture and different lighting conditions; and constructing the face feature library from the face features acquired for each registered face under each preset posture and each lighting condition. That is, each three-dimensionally reconstructed face image corresponds to one group of face features in the library; each group corresponds to the posture feature of the posture of its reconstructed image and to the ambient lighting feature of the lighting condition of that image. The specific method of extracting face features from a face image is as described in the previous step and is not repeated here.
The preset postures may be postures defined by the pitch, yaw, and roll angles of the face. For example, a pitch angle of less than 10 degrees, a yaw angle of less than 5 degrees, and a roll angle of less than 5 degrees may be defined as posture 1; a pitch angle of less than 20 degrees, a yaw angle of less than 5 degrees, and a roll angle of less than 5 degrees as posture 2; and so on. A variety of postures can be defined in this way; the preset postures are determined according to the accuracy and robustness requirements of face recognition.
An ambient lighting condition is determined by any one or more of: the light source type (e.g., natural light, lamp light, spotlight), the light attribute (e.g., point light source, parallel light source), the point light source position (e.g., expressed as (x, y, z)), the parallel light direction (e.g., expressed as (θ1, θ2, θ3)), and the illumination color (e.g., expressed as (R, G, B)). For example, the condition represented by light source type natural light, light attribute parallel light, point light source position (1,1,0), parallel light direction (0,0,1), and illumination color (128,128,128) may be defined as ambient lighting condition 1; the condition represented by light source type natural light, light attribute parallel light, point light source position (1,1,0), parallel light direction (0,0,5), and illumination color (128,128,128) as ambient lighting condition 2; and so on. A variety of ambient lighting conditions can be determined in this way.
In some embodiments of this application, taking M preset postures and N preset ambient lighting conditions as an example, the registration image P1 of registered user U1 yields, after three-dimensional reconstruction, M*N face images, each corresponding to one preset posture and one preset lighting condition; there are N face images for each preset posture, corresponding one-to-one to the N preset lighting conditions. Further, from each of U1's M*N reconstructed face images, one group of U1's face features can be extracted, giving U1 M*N groups of face features corresponding to the different preset postures and different preset lighting conditions. In this way, M*N groups of face features corresponding to different preset postures and different preset lighting conditions can be obtained for every registered user. In the embodiments of this application, M and N are positive integers. For the scheme of extracting face features from the reconstructed face images, refer to the specific technical solution for determining the face features of the face image to be recognized in the preceding steps; it is not repeated here.
In specific implementations, before face recognition is performed, that is, before the step of filtering out from the preset face feature library at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized to form a face feature set, the method further includes: constructing a posture-environment-lighting model, which contains each preset ambient lighting feature corresponding to each preset posture feature. Each posture feature corresponds to multiple groups of ambient lighting features.
In some embodiments of this application, constructing the posture-environment-lighting model includes: for each preset posture feature, determining the several three-dimensionally reconstructed face images corresponding to that posture feature; for each of those images corresponding to the same preset lighting condition, determining the image's ambient lighting feature from the HSV color histograms of the patch color features of its face key points; and determining the average of the ambient lighting features of all images corresponding to the same preset lighting condition as the ambient lighting feature for that posture feature and that lighting condition. For example, one group of ambient lighting features can be extracted from each face image of each registered user. HSV color histograms of face key point patch color features are extracted from the face images of all registered users corresponding to a designated preset posture (e.g., posture1) and a designated preset lighting condition (e.g., condition1), yielding each user's ambient lighting feature for the image under posture1 and condition1. The ambient lighting features of the registered users' images under posture1 and condition1 are then summed and averaged, and the result serves as one ambient lighting feature of posture1 (namely, the ambient lighting feature of posture1 under condition1). For the scheme of extracting the HSV color histograms of face key point patch color features from each reconstructed face image, refer to the specific technical solution for determining the ambient lighting feature of the face image to be recognized in the preceding steps; it is not repeated here.
Following the above method, the ambient lighting features corresponding to posture1 and the N lighting conditions can be determined, as can the ambient lighting features of the N lighting conditions under every posture. The correspondence between each of the M postures and the ambient lighting features of the N lighting conditions constitutes the posture-environment-lighting model.
In some embodiments of this application, a weight is set for each ambient lighting feature; the weight is used to compute the similarity between each ambient lighting feature in the posture-environment-lighting model and the ambient lighting feature matching the face image to be recognized.
In some embodiments of this application, the face feature library contains M face feature sub-libraries, each corresponding to one preset posture feature, that is, one posture; each sub-library further contains N face feature sets, each corresponding to one group of face features and one ambient lighting feature.
In other embodiments of this application, each group of face features may be stored indexed by posture feature and ambient lighting feature, for example in the form (posture, condition, character), where posture denotes the posture feature of the group, condition its ambient lighting feature, and character the group of face features itself.
In some embodiments of this application, filtering out from the preset face feature library at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set, includes: taking each group of ambient lighting features corresponding to the posture feature in the preset posture-environment-lighting model as designated ambient lighting features; computing, from the current weight of each group of designated ambient lighting features, the similarity probability between the ambient lighting feature and each group of designated ambient lighting features; determining the group of designated ambient lighting features with the largest similarity probability as the current ambient lighting feature; and selecting, from the preset face feature library, the face features matching the posture feature and the current ambient lighting feature to form the face feature set.
For example, assuming the posture feature matching the face image to be recognized is posture1, the N ambient lighting features corresponding to posture1 in the previously built posture-environment-lighting model can be taken as N designated ambient lighting features; the similarity probabilities between these N designated features and the ambient lighting feature matching the image are then computed separately. The similarity probability between a designated ambient lighting feature and the image's ambient lighting feature reflects how similar the image's capture lighting conditions are to the designated lighting condition: the greater the similarity probability between the image's ambient lighting feature and a given designated ambient lighting feature, the more similar the image's capture lighting conditions are to the lighting condition of that designated feature. In some embodiments of this application, the group of designated ambient lighting features with the largest similarity probability may be determined as the current ambient lighting feature. The registered face features of all users in the face feature library that match the current ambient lighting feature and posture feature then form the face feature set used for face comparison.
In other embodiments of this application, the filtering step includes: taking each group of ambient lighting features corresponding to the posture feature in the preset posture-environment-lighting model as designated ambient lighting features; computing, from the current weight of each group, the similarity probability between the ambient lighting feature and each group of designated ambient lighting features; selecting, from the face features in the library that match the posture feature, those corresponding to the designated ambient lighting features whose similarity probability meets a preset condition, and establishing one face feature subset per group of designated ambient lighting features; for each user, performing weighted fusion of the user's face features across the face feature subsets to obtain the user's fused face feature, where the weight of each face feature in the fusion is positively correlated with the similarity probability of the designated ambient lighting feature of the subset it belongs to; and determining the fused face features of all users as the face feature set. The preset condition may be the preset number of groups with the largest similarity probability (e.g., 3 groups), the number being determined according to the number of lighting environment features; alternatively, the preset condition may be that the similarity probability is greater than a preset value, for example 0.
For example, assuming again that the posture feature matching the face image to be recognized is posture1, the N ambient lighting features corresponding to posture1 in the previously built model can be taken as N designated ambient lighting features, and their similarity probabilities with the image's ambient lighting feature are computed separately. Then, from the face features in the library that match the posture feature, the face features corresponding to the designated ambient lighting features whose similarity probability is greater than 0 (e.g., condition1, condition3, condition5) are selected, and one face feature subset is established per group, giving three face feature subsets S1, S2, and S3. For each registered user, the user's face features in the subsets (e.g., character1, character2, and character3) are weight-fused, for instance as

character_fused = Σ_{i=1}^{K} β(i)·character(i) / Σ_{i=1}^{K} β(i)

(the exact formula is published only as an image; the normalized weighted sum above is a reconstruction from the surrounding definitions), to obtain the user's fused face feature. Here, character(i) denotes a group of face features of a given user in face feature subset i; β(i) denotes the similarity probability between the designated ambient lighting feature corresponding to subset i and the image's ambient lighting feature; and K denotes the number of face feature groups with the largest similarity probabilities that were selected. Finally, the fused face features of all registered users form the face feature set.
Selecting face features corresponding to several designated ambient lighting features and fusing them before recognition, rather than using only the face features of the single closest designated lighting feature, improves the robustness of face recognition to ambient lighting.
In some embodiments of this application, the similarity probability between the ambient lighting feature matching the face image to be recognized and any designated ambient lighting feature is computed by the following formula:

similarity probability = weight * G(x_i, e),

where e is the ambient lighting feature matching the face image to be recognized and x_i is the current designated ambient lighting feature; G(x_i, e) denotes a Gaussian of the distance between the two lighting features (its exact definition is published only as an image). In some embodiments of this application, other methods may also be used to compute the similarity probability. The similarity probability between the image's ambient lighting feature and a designated ambient lighting feature is positively correlated with the weight of that designated feature, and decreases as the distance between the two lighting features grows.
In some embodiments of this application, when the posture-environment-lighting model is established, the weights of all ambient lighting features are set to equal initial values; thereafter, the weight of each ambient lighting feature in the model changes dynamically. As shown in FIG. 2, after at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized has been filtered out from the preset face feature library to form the face feature set, the method further includes step 140.
Step 140: According to the ambient lighting feature, update the weights used to compute the similarity between each designated ambient lighting feature in the posture-environment-lighting model and the ambient lighting feature. The weight-update step may be executed after the step of filtering out from the preset face feature library at least one face feature matching the posture feature and ambient lighting feature of the image to form the face feature set, or after the step of comparing the image's face feature with each face feature in the set to obtain the face recognition result.
In some embodiments of this application, updating the weights according to the ambient lighting feature includes: computing, from the current weight of each group of designated ambient lighting features corresponding to the posture feature in the model, the similarity probability between the image's ambient lighting feature and each group of designated ambient lighting features; and updating the current weights of the designated ambient lighting features according to those probabilities, so that the updated weight of each group is positively correlated with that group's similarity probability with the ambient lighting feature. In some embodiments of this application, the weights are updated by summing each designated feature's current weight with the similarity distance between the image's ambient lighting feature and that designated feature, and then normalizing. For example, a candidate updated weight is first computed for each designated ambient lighting feature by the formula: updated weight w'_xi = current weight of w_xi + G(x_i, e), where G(x_i, e) denotes the Gaussian of the designated ambient lighting feature x_i relative to the ambient lighting feature matched by the image; the candidate weights are then normalized to obtain the updated weight of each designated feature, for example as: weight of designated ambient lighting feature x_i = w'_xi / (w'_x1 + w'_x2 + … + w'_xi + … + w'_xN), where x1 to xN are the N groups of designated ambient lighting features corresponding to the same posture feature.
From the method of computing the similarity probability described above, adjusting the weight of an ambient lighting feature affects its similarity probability with the ambient lighting feature matching the face image to be recognized, and thereby further affects the face feature set selected on the basis of that probability. For example, when face recognition is performed on a face image p1 captured in the morning, the weights of the various ambient lighting features in the posture-environment-lighting model are updated according to the morning's ambient lighting features; after the update, the face features corresponding to designated lighting features that are the same as or similar to the morning lighting conditions have a greater probability of being selected for face matching against p1, improving the recognition accuracy of face images captured in the morning. Similarly, in the evening, recognition of a face image p2 captured in the evening updates the weights of the various ambient lighting features in the model according to the evening's lighting features; after the update, the face features corresponding to designated lighting features that are the same as or similar to the evening lighting conditions have a greater probability of being selected for face matching against p2, improving the recognition accuracy of face images captured in the evening.
Step 130: Compare the face feature of the face image to be recognized with each face feature in the face feature set, to obtain the face recognition result of the face image to be recognized.
After the face feature set matching the current posture feature and ambient lighting feature has been determined, the similarity between the face feature matching the face image to be recognized and each group of face features in the set can be computed separately, thereby comparing the image's face feature against each face feature in the set. The face feature in the library matching the image is then determined from the computed similarities; at this point, the identity information of the face image to be recognized can be further determined.
The face recognition method disclosed in the embodiments of this application determines the posture feature, ambient lighting feature, and face feature of the face image to be recognized; filters out, from the preset face feature library, at least one face feature matching the image's posture feature and ambient lighting feature to form a face feature set; and compares the image's face feature with each face feature in the set to obtain the recognition result, which helps improve the accuracy of face recognition. By selecting registered face images whose posture and ambient lighting conditions are the same as or similar to those of the face image to be recognized for comparison, the method reduces the impact that differences in posture and lighting have on face comparison and can cope with complex lighting and posture changes, thereby improving recognition accuracy in real scenarios.
On the other hand, this application establishes the posture-environment-lighting model in advance, computes the weights of the similarity probabilities between the various ambient lighting features contained in the model and the current ambient lighting feature, and adjusts those weights dynamically, so that the face recognition process dynamically learns real-time lighting conditions, further improving recognition accuracy.
Embodiment 2
An embodiment of this application discloses a face recognition device. As shown in FIG. 3, the device includes:
a feature determination module 310, used to determine the posture feature, ambient lighting feature, and face feature of the face image to be recognized;
a face feature set forming module 320, used to filter out, from a preset face feature library, at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set;
a face recognition module 330, used to compare the face feature of the face image to be recognized with each face feature in the face feature set, to obtain the face recognition result of the face image to be recognized.
In some embodiments of this application, as shown in FIG. 4, the device further includes:
a weight update module 340, used to update, according to the ambient lighting feature, the weights used to compute the similarity between each designated ambient lighting feature in the posture-environment-lighting model and the ambient lighting feature.
In some embodiments of this application, updating, according to the ambient lighting feature, the weights used to compute the similarity between each designated ambient lighting feature in the posture-environment-lighting model and the ambient lighting feature includes:
computing, from the current weight of each group of designated ambient lighting features corresponding to the posture feature in the posture-environment-lighting model, the similarity probability between the ambient lighting feature matched by the face image to be recognized and each group of designated ambient lighting features;
updating the current weights of the designated ambient lighting features according to the similarity probabilities, so that the updated weight of each group of designated ambient lighting features is positively correlated with that group's similarity probability with the ambient lighting feature.
In some embodiments of this application, the feature determination module 310 is further used to:
determine, through a preset face posture recognition model, the posture feature and face key point information matching the face image to be recognized, and determine, through a preset face feature extraction model, the face features in the face image to be recognized;
compute regional HSV color histograms from the face key point information to determine the ambient lighting feature of the face image to be recognized.
In some embodiments of this application, the face feature set forming module 320 is further used to:
take each group of ambient lighting features corresponding to the posture feature in the preset posture-environment-lighting model as designated ambient lighting features;
compute, from the current weight of each group of designated ambient lighting features, the similarity probability between the ambient lighting feature and each group of designated ambient lighting features;
determine the group of designated ambient lighting features corresponding to the largest similarity probability as the current ambient lighting feature;
select, from the preset face feature library, the face features matching the posture feature and the current ambient lighting feature, to form the face feature set.
In other embodiments of this application, the face feature set forming module 320 is further used to:
take each group of ambient lighting features corresponding to the posture feature in the preset posture-environment-lighting model as designated ambient lighting features;
compute, from the current weight of each group of designated ambient lighting features, the similarity probability between the ambient lighting feature and each group of designated ambient lighting features;
select, from the face features in the preset face feature library that match the posture feature, the face features corresponding to the designated ambient lighting features whose similarity probability meets a preset condition, and establish one face feature subset corresponding to each group of designated ambient lighting features;
for each user, perform weighted fusion of the user's face features in each of the face feature subsets to obtain the user's fused face feature, where the weight of each face feature in the weighted fusion is positively correlated with the similarity probability of the designated ambient lighting feature corresponding to the face feature subset in which the face feature resides;
determine the fused face features of each user to form the face feature set.
In other embodiments of this application, as shown in FIG. 4, the device further includes:
a face feature library construction module 350, used to construct the face feature library, which contains face features corresponding to preset posture features and preset ambient lighting features.
In some embodiments of this application, the face feature library construction module 350 is further used to:
acquire a frontal image of a registered face;
perform three-dimensional reconstruction on the frontal image based on preset postures and preset ambient lighting conditions, to obtain face images of the registered face under each preset posture and different ambient lighting conditions corresponding to the frontal image;
acquire one group of face features for each face image of the registered face under each preset posture and different ambient lighting conditions;
construct the face feature library from the acquired face features of each face image of the registered face under each preset posture and different ambient lighting conditions.
The face recognition device disclosed in the embodiments of this application is used to implement the face recognition method described in Embodiment 1 of this application; the specific implementation of each module of the device is not repeated here, and reference may be made to the specific implementation of the corresponding steps in the method embodiment.
The face recognition device disclosed in the embodiments of this application determines the posture feature, ambient lighting feature, and face feature of the face image to be recognized; filters out, from the preset face feature library, at least one face feature matching the image's posture feature and ambient lighting feature to form a face feature set; and compares the image's face feature with each face feature in the set to obtain the recognition result, which helps improve the accuracy of face recognition. By selecting registered face images whose posture and ambient lighting conditions are the same as or similar to those of the face image to be recognized for comparison, the device reduces the impact that differences in posture and lighting have on face comparison and can cope with complex lighting and posture changes, thereby improving recognition accuracy in real scenarios.
On the other hand, this application establishes the posture-environment-lighting model in advance and dynamically adjusts the weights used to compute the similarity probabilities between the various ambient lighting features contained in the model and the current ambient lighting feature, so that the face recognition process dynamically learns real-time lighting conditions, further improving recognition accuracy.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the same or similar parts the embodiments may be referred to one another. Since the device embodiment is substantially similar to the method embodiment, its description is relatively simple; for relevant points, refer to the description of the method embodiment.
The face recognition method and device disclosed in this application have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of this application, and the descriptions of the above embodiments are only intended to help understand the method of this application and one of its core ideas. Meanwhile, for those of ordinary skill in the art, changes will occur in the specific implementation and scope of application in accordance with the ideas of this application. In summary, the contents of this specification should not be construed as limiting this application.
The device embodiments described above are merely illustrative. Units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative work.
The various component embodiments of this application may be implemented in hardware, in software modules running on one or more processors, or in a combination of the two. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the electronic device according to the embodiments of this application. This application may also be implemented as a device or apparatus program (e.g., a computer program or computer program product) for executing part or all of the methods described herein. Such a program implementing this application may be stored on a computer-readable medium, or may take the form of one or more signals; such a signal may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
For example, FIG. 5 shows an electronic device that can implement the method according to this application. The electronic device may be a PC, a mobile terminal, a personal digital assistant, a tablet computer, etc. The electronic device conventionally includes a processor 520, a memory 510, and program code 530 stored on the memory 510 and runnable on the processor 520; when the processor 520 executes the program code 530, the method described in the above embodiments is implemented. The memory 510 may be a computer program product or a computer-readable medium. The memory 510 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM, a hard disk, or ROM. The memory 510 has a storage space 5101 for the program code 530 of a computer program for executing any of the method steps above; for example, the storage space 5101 for the program code 530 may include individual computer programs each implementing one of the various steps of the above method. The program code 530 is computer-readable code. These computer programs can be read from or written into one or more computer program products, which include program code carriers such as hard disks, compact discs (CDs), memory cards, or floppy disks. The computer program includes computer-readable code which, when run on an electronic device, causes the electronic device to execute the method according to the above embodiments.
An embodiment of this application also discloses a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the face recognition method described in Embodiment 1 of this application are implemented.
Such a computer program product may be a computer-readable storage medium, which may have storage segments, storage spaces, etc. arranged similarly to the memory 510 in the electronic device shown in FIG. 5. The program code may, for example, be compressed and stored in the computer-readable storage medium in an appropriate form. The computer-readable storage medium is usually a portable or fixed storage unit as described with reference to FIG. 6. Typically, the storage unit includes computer-readable code 530', i.e., code readable by a processor which, when executed by the processor, implements the steps of the method described above.
References herein to "one embodiment", "an embodiment", or "one or more embodiments" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of this application. In addition, note that instances of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.
Numerous specific details are set forth in the specification provided herein. However, it will be understood that the embodiments of this application may be practiced without these specific details. In some instances, well-known methods, structures, and techniques are not shown in detail so as not to obscure the understanding of this specification.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of multiple such elements. This application can be realized by means of hardware comprising several distinct elements and by means of a suitably programmed computer; in a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any order; these words may be interpreted as names.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or equivalently replace some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (12)

  1. A face recognition method, comprising:
    determining the posture feature, ambient lighting feature, and face feature of a face image to be recognized;
    filtering out, from a preset face feature library, at least one face feature that matches the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set;
    comparing the face feature of the face image to be recognized with each face feature in the face feature set, to obtain a face recognition result for the face image to be recognized.
  2. The method according to claim 1, further comprising, after the step of filtering out, from the preset face feature library, at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized to form a face feature set:
    updating, according to the ambient lighting feature, the weights used to compute the similarity between each designated ambient lighting feature in the posture-environment-lighting model and the ambient lighting feature.
  3. The method according to claim 2, wherein the step of updating, according to the ambient lighting feature, the weights used to compute the similarity between each designated ambient lighting feature in the posture-environment-lighting model and the ambient lighting feature comprises:
    computing, from the current weight of each group of designated ambient lighting features corresponding to the posture feature in the posture-environment-lighting model, the similarity probability between the ambient lighting feature matched by the face image to be recognized and each group of the designated ambient lighting features;
    updating the current weights of the designated ambient lighting features according to the similarity probabilities, so that the updated weight of each group of designated ambient lighting features is positively correlated with that group's similarity probability with the ambient lighting feature.
  4. The method according to any one of claims 1 to 3, wherein the step of determining the posture feature, ambient lighting feature, and face feature of the face image to be recognized comprises:
    determining, through a preset face posture recognition model, the posture feature and face key point information matching the face image to be recognized, and determining, through a preset face feature extraction model, the face features in the face image to be recognized;
    computing regional HSV color histograms from the face key point information to determine the ambient lighting feature of the face image to be recognized.
  5. The method according to any one of claims 1 to 3, wherein the step of filtering out, from the preset face feature library, at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set, comprises:
    taking each group of ambient lighting features corresponding to the posture feature in a preset posture-environment-lighting model as designated ambient lighting features;
    computing, from the current weight of each group of the designated ambient lighting features, the similarity probability between the ambient lighting feature and each group of the designated ambient lighting features;
    determining the group of designated ambient lighting features corresponding to the largest similarity probability as the current ambient lighting feature;
    selecting, from the preset face feature library, the face features matching the posture feature and the current ambient lighting feature, to form the face feature set.
  6. The method according to any one of claims 1 to 3, wherein the step of filtering out, from the preset face feature library, at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set, comprises:
    taking each group of ambient lighting features corresponding to the posture feature in a preset posture-environment-lighting model as designated ambient lighting features;
    computing, from the current weight of each group of the designated ambient lighting features, the similarity probability between the ambient lighting feature and each group of the designated ambient lighting features;
    selecting, from the face features in the preset face feature library that match the posture feature, the face features corresponding to the designated ambient lighting features whose similarity probability meets a preset condition, and establishing one face feature subset corresponding to each group of the designated ambient lighting features;
    for each user, performing weighted fusion of the user's face features in each of the face feature subsets to obtain the user's fused face feature, wherein the weight of each face feature in the weighted fusion is positively correlated with the similarity probability of the designated ambient lighting feature corresponding to the face feature subset in which the face feature resides;
    determining the fused face features of each user to form the face feature set.
  7. The method according to any one of claims 1 to 3, further comprising, before the step of filtering out, from the preset face feature library, at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized to form a face feature set: constructing the face feature library, the face feature library containing face features corresponding to preset posture features and preset ambient lighting features.
  8. The method according to claim 7, wherein the step of constructing the face feature library comprises:
    acquiring a frontal image of a registered face;
    performing three-dimensional reconstruction on the frontal image based on preset postures and preset ambient lighting conditions, to obtain face images of the registered face under each preset posture and different ambient lighting conditions corresponding to the frontal image;
    acquiring one group of face features for each face image of the registered face under each preset posture and different ambient lighting conditions;
    constructing the face feature library from the acquired face features of each face image of the registered face under each preset posture and different ambient lighting conditions.
  9. A face recognition device, comprising:
    a feature determination module, used to determine the posture feature, ambient lighting feature, and face feature of a face image to be recognized;
    a face feature set forming module, used to filter out, from a preset face feature library, at least one face feature matching the posture feature and ambient lighting feature of the face image to be recognized, to form a face feature set;
    a face recognition module, used to compare the face feature of the face image to be recognized with each face feature in the face feature set, to obtain the face recognition result of the face image to be recognized.
  10. An electronic device, comprising a memory, a processor, and program code stored on the memory and runnable on the processor, wherein the processor implements the face recognition method according to any one of claims 1 to 8 when executing the program code.
  11. A computer-readable storage medium on which program code is stored, wherein the program code, when executed by a processor, implements the steps of the face recognition method according to any one of claims 1 to 8.
  12. A computer program comprising computer-readable code which, when run on an electronic device, causes the electronic device to execute the face recognition method according to any one of claims 1 to 8.
PCT/CN2020/116486 2020-02-24 2020-09-21 Face recognition WO2021169257A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010113903.6 2020-02-24
CN202010113903.6A CN111414803A (zh) 2020-02-24 2020-02-24 Face recognition method and device, and electronic device

Publications (1)

Publication Number Publication Date
WO2021169257A1 (zh) 2021-09-02

Family

ID=71494202

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/116486 WO2021169257A1 (zh) 2020-02-24 2020-09-21 Face recognition

Country Status (2)

Country Link
CN (1) CN111414803A (zh)
WO (1) WO2021169257A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778487A (zh) * 2023-08-11 2023-09-19 四川汉唐云分布式存储技术有限公司 A system and method for adding a unique identifier to an individual's head in a monitoring interface

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597891B (zh) * 2020-12-22 2022-05-06 深圳市海威达科技有限公司 Face recognition image processing method and system
CN112818909A (zh) 2021-02-22 2021-05-18 Oppo广东移动通信有限公司 Image update method and apparatus, electronic device, and computer-readable medium
CN112862821A (zh) 2021-04-01 2021-05-28 中国工商银行股份有限公司 Water leakage detection method and apparatus based on image processing, computing device, and medium
CN113657187A (zh) 2021-07-26 2021-11-16 浙江大华技术股份有限公司 Face recognition method, device, and computer-readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101159015A (zh) * 2007-11-08 2008-04-09 清华大学 Method for recognizing two-dimensional face images
CN102799871A (zh) * 2012-07-13 2012-11-28 Tcl集团股份有限公司 Face tracking and recognition method
CN102938065A (zh) * 2012-11-28 2013-02-20 北京旷视科技有限公司 Face feature extraction method and face recognition method based on large-scale image data
US20180046854A1 (en) * 2015-02-16 2018-02-15 University Of Surrey Three dimensional modelling
CN110647782A (zh) * 2018-06-08 2020-01-03 北京信息科技大学 Three-dimensional face reconstruction and multi-pose face recognition method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101320484B (zh) * 2008-07-17 2012-01-04 清华大学 Method for generating virtual face images and three-dimensional face recognition method
CN103136533B (zh) * 2011-11-28 2015-11-25 汉王科技股份有限公司 Face recognition method and device based on dynamic thresholds
CN102982321B (zh) * 2012-12-05 2016-09-21 深圳Tcl新技术有限公司 Face database acquisition method and device
CN104243843B (zh) * 2014-09-30 2017-11-03 北京智谷睿拓技术服务有限公司 Shooting illumination compensation method, compensation device, and user equipment
CN106845385A (zh) * 2017-01-17 2017-06-13 腾讯科技(上海)有限公司 Method and device for video target tracking


Also Published As

Publication number Publication date
CN111414803A (zh) 2020-07-14

Similar Documents

Publication Publication Date Title
WO2021169257A1 (zh) Face recognition
WO2021077984A1 (zh) Object recognition method and apparatus, electronic device, and readable storage medium
CN106372581B (zh) Method for constructing and training a face recognition feature extraction network
Cheng et al. Person re-identification by multi-channel parts-based cnn with improved triplet loss function
CN106815566B (zh) Face retrieval method based on a multi-task convolutional neural network
Guo et al. Online early-late fusion based on adaptive hmm for sign language recognition
Goodfellow et al. Multi-digit number recognition from street view imagery using deep convolutional neural networks
Lin et al. Learning correspondence structures for person re-identification
Feng et al. Triplet distillation for deep face recognition
CN109522945B (zh) Group emotion recognition method and apparatus, smart device, and storage medium
Aly et al. Indexing in large scale image collections: Scaling properties and benchmark
Satta et al. Fast person re-identification based on dissimilarity representations
CN112395979B (zh) Image-based health state recognition method, apparatus, device, and storage medium
CN110188223A (zh) Image processing method and apparatus, and computer device
CN109740679B (zh) Target recognition method based on a convolutional neural network and naive Bayes
CN109635752B (zh) Face key point localization method, face image processing method, and related apparatus
CN105005777A (zh) Face-based audio and video recommendation method and system
CN105740808B (zh) Face recognition method and device
CN110414550B (zh) Training method, apparatus, and system for a face recognition model, and computer-readable medium
Chen et al. Face recognition using ensemble string matching
CN111401521B (zh) Neural network model training method and apparatus, and image recognition method and apparatus
CN111327949B (zh) Temporal action detection method, apparatus, device, and storage medium for video
CN108491754A (zh) Human behavior recognition method based on dynamic representation and matching of skeletal features
CN109934114A (zh) Finger vein template generation and update algorithm and system
Liu et al. An end-to-end deep model with discriminative facial features for facial expression recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20922048

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20922048

Country of ref document: EP

Kind code of ref document: A1