WO2022142375A1 - A face recognition method, apparatus and electronic device - Google Patents


Info

Publication number
WO2022142375A1
WO2022142375A1 (PCT/CN2021/113209; CN2021113209W)
Authority
WO
WIPO (PCT)
Prior art keywords
face image
image group
recognized
face
image
Prior art date
Application number
PCT/CN2021/113209
Other languages
English (en)
French (fr)
Inventor
李林峰
黄海荣
Original Assignee
亿咖通(湖北)技术有限公司
Priority date
Filing date
Publication date
Application filed by 亿咖通(湖北)技术有限公司
Publication of WO2022142375A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation

Definitions

  • the present disclosure relates to the technical field of artificial intelligence, and in particular, to a face recognition method, device and electronic device.
  • an on-board camera can usually be installed in the vehicle.
  • the on-board camera can collect the face images of the drivers and passengers in the vehicle in real time, so that their identities can be recognized according to the collected face images and personalized response actions can be provided, enhancing the user experience of the drivers and passengers.
  • the vehicle-mounted camera collects the face images of the driver and occupant in real time, and performs face recognition on each frame of the face image.
  • when a person is talking to someone, facing the car window, or in some other position state, the vehicle-mounted camera cannot capture a frontal face image of the driver or passenger, which in turn causes identification of the driver or passenger to fail; and
  • when the vehicle-mounted camera can capture a frontal face image of the driver or passenger, the identity of the driver or passenger can be successfully recognized.
  • since drivers and passengers may move through different position states, recognition success and recognition failure may therefore alternate many times.
  • one or more embodiments of the present invention provide a face recognition method, the method comprising:
  • each image included in each of the reference face image groups corresponds to the same personnel information
  • the first image group is: a reference face image group whose difference value from the to-be-recognized face image is less than a first threshold value;
  • the second image group is: a reference face image group whose difference value from the face image to be recognized is not less than the first threshold value and less than a second threshold value, wherein the first threshold value is less than the second threshold value;
  • the identity of the person corresponding to the face image to be recognized is determined based on the person information corresponding to the second image group.
  • the step of calculating the difference value between the to-be-identified face image and each group of reference face images includes:
  • difference values between the face image to be recognized and each of the reference face image groups are calculated respectively.
  • the step of calculating the similarity between the to-be-recognized face image and each of the reference face image groups includes:
  • the method further includes:
  • if the first image group exists, determining whether the difference between the face image to be recognized and the first image group is smaller than a third threshold value, where the third threshold value is smaller than the first threshold value;
  • the reference face image group corresponding to the first image group is updated by using the face image to be recognized.
  • the step of updating the reference face image group corresponding to the first image group by using the to-be-identified face image includes:
  • the first image group includes a reference face image group and a dynamic face image group, wherein the reference face image group includes at least one reference face image, and the step of updating the first image group by using the face image to be recognized includes:
  • the reference face image group is maintained, and the dynamic face image group is updated with the to-be-recognized face image.
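As a minimal sketch of this update rule (the class name, parameter names, and the fixed dynamic capacity are assumptions for illustration, not taken from the patent):

```python
from collections import deque

class FaceGroup:
    """Sketch of an image group with a fixed reference part and a
    bounded dynamic part; capacity of the dynamic part is assumed."""

    def __init__(self, reference_images, dynamic_capacity=5):
        self.reference = list(reference_images)        # never replaced
        self.dynamic = deque(maxlen=dynamic_capacity)  # oldest evicted first

    def update(self, recognized_image):
        # The reference face images are maintained; only the dynamic
        # part absorbs the newly recognized face image.
        self.dynamic.append(recognized_image)

    def images(self):
        return self.reference + list(self.dynamic)
```

With a capacity of 2, three successive updates keep the reference image untouched and retain only the two most recent dynamic images.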
  • the step of judging whether the person corresponding to the face image to be recognized is a tracked person includes:
  • determining a face image to be tracked from a third image group, wherein the third image group is formed by face images acquired within a preset time range before the face image to be recognized is acquired;
  • if the fourth image group and the second image group are the same reference face image group, it is determined that the person corresponding to the face image to be recognized is the tracked person.
  • one or more embodiments of the present invention provide a face recognition device, the device comprising:
  • the difference value calculation module is used to obtain the face image to be identified, and calculate the difference value between the face image to be identified and each reference face image group, wherein each image included in each of the reference face image groups corresponds to information on the same person;
  • a first image group judgment module configured to judge whether there is a first image group according to the difference value, wherein, if the first image group exists, the first image group judgment module triggers the first result determination module, and otherwise triggers the second image group judgment module, wherein the first image group is: a reference face image group whose difference value from the face image to be recognized is less than a first threshold value;
  • the first result determination module is configured to determine the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the first image group;
  • the second image group judging module is configured to judge whether there is a second image group according to the difference value, wherein if the second image group exists, the second image group judging module triggers the tracking person judging module;
  • the tracking person judging module is used for judging whether the person corresponding to the face image to be recognized is a tracked person, wherein, if the person is judged to be a tracked person, the tracking person judging module triggers the second result determination module;
  • the second result determination module is configured to determine the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group.
  • the difference value calculation module when performing the calculation of the difference value between the to-be-identified face image and each group of reference face images, is configured to:
  • difference values between the face image to be recognized and each of the reference face image groups are calculated respectively.
  • the difference value calculation module when performing the calculation of the similarity between the face image to be recognized and each of the reference face image groups, is configured to:
  • the first image group judgment module is further configured to:
  • if the first image group exists, determine whether the difference between the face image to be recognized and the first image group is smaller than a third threshold value, where the third threshold value is smaller than the first threshold value;
  • the reference face image group corresponding to the first image group is updated by using the face image to be recognized.
  • the first image group judging module when performing the updating of the reference face image group corresponding to the first image group using the to-be-identified face image, is configured to:
  • the first image group includes a reference face image group and a dynamic face image group, wherein the reference face image group includes at least one reference face image, wherein the first image group judgment module is used for:
  • the reference face image group is maintained, and the dynamic face image group is updated with the to-be-recognized face image.
  • the tracking person judging module when performing the judging whether the person corresponding to the face image to be recognized is the person being tracked, is configured to:
  • determining a face image to be tracked from a third image group, wherein the third image group is formed by face images acquired within a preset time range before the face image to be recognized is acquired;
  • if the fourth image group and the second image group are the same reference face image group, it is determined that the person corresponding to the face image to be recognized is the tracked person.
  • one or more embodiments of the present invention provide an electronic device comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
  • the memory is used to store computer programs
  • the processor is configured to execute the program stored in the memory, so as to implement the steps of any of the face recognition methods provided in the first aspect.
  • one or more embodiments of the present invention provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, any of the face recognition methods provided in the first aspect above is implemented.
  • one or more embodiments of the present invention provide a computer program product comprising instructions, which, when executed on a computer, cause the computer to execute any of the face recognition methods provided in the first aspect above.
  • FIG. 1 is a schematic flowchart of a face recognition method provided according to one or more embodiments of the present invention
  • Fig. 2 (a) is a schematic diagram of detecting that the number of key points in a human face is 5 according to one or more embodiments of the present invention
  • FIG. 2(b) is a schematic diagram of detecting that the number of key points in a human face is 68 points according to one or more embodiments of the present invention
  • Fig. 2(c) is a schematic diagram of the key point representation of Fig. 2(b);
  • FIG. 3 is a schematic diagram of a first feature vector and a second feature vector located in a two-dimensional coordinate system according to one or more embodiments of the present invention;
  • Fig. 4(a) is a schematic diagram in which the face region in the face image to be recognized is a non-vertical region, according to one or more embodiments of the present invention;
  • Fig. 4(b) is a schematic diagram in which the face region in the aligned face image, obtained by aligning Fig. 4(a), is a vertical region;
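The alignment from Fig. 4(a) to Fig. 4(b) can be illustrated by computing the in-plane rotation angle from the two eye key points; this is a common sketch of the idea, not necessarily the exact alignment used in the patent (the function name and coordinate conventions are assumptions):

```python
import math

def roll_angle(left_eye, right_eye):
    """Angle (in degrees) by which the face image must be rotated so
    that the line through the eyes becomes horizontal, i.e. the face
    region becomes a vertical/upright area. Eye coordinates are
    (x, y) positions in image pixels."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```

In practice the image would then be rotated by this angle about the midpoint between the eyes (e.g. with an affine warp).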
  • FIG. 5 is a schematic structural diagram of a face recognition device according to one or more embodiments of the present invention.
  • FIG. 6 is a schematic structural diagram of an electronic device according to one or more embodiments of the present invention.
  • One or more embodiments of the present invention provide a face recognition method. This method can be applied to any scene that requires continuous recognition of face images. For example, during the driving process of the vehicle, the face images collected by the on-board cameras are used to continuously identify the drivers and passengers; for another example, the road traffic cameras are used to continuously identify the face images of the pedestrians.
  • the method can be applied to any type of electronic device, that is, the execution body of the method can be any type of electronic device, for example, a car camera, a mobile phone, a notebook computer, a desktop computer, and the like.
  • for the sake of clarity, these are hereinafter referred to simply as electronic devices.
  • an electronic device that can directly collect face images, for example a vehicle-mounted camera, can itself execute the method provided by the embodiments of the present invention to continuously recognize the collected face images; alternatively, it can send the collected face images to another electronic device (for example, a desktop computer) that executes the method provided by the embodiments of the present invention, so that the other electronic device continuously recognizes the collected face images.
  • the embodiments of the present invention do not limit the applicable scenarios and execution subjects of the face recognition method provided by the embodiments of the present invention.
  • the first image group is a reference face image group whose difference value from the face image to be recognized is less than a first threshold value
  • if the first image group does not exist, it is determined whether there is a second image group according to the difference value, wherein the difference value between the second image group and the face image to be recognized is not less than the first threshold value and is less than the second threshold value;
  • the identity of the person corresponding to the face image to be recognized is determined based on the person information corresponding to the second image group.
  • after no first image group is found, it can be further determined whether any reference face image group is a second image group, that is, one whose difference value from the face image to be recognized is not less than the first threshold value and less than the second threshold value. When a second image group exists, it can be further judged whether the person corresponding to the face image to be recognized is a tracked person; when the person is judged to be a tracked person, it can be determined that the person corresponding to the face image to be recognized is the person corresponding to the second image group. Therefore, based on the personnel information corresponding to the second image group, the identity of the person corresponding to the face image to be recognized can be determined, and the face image to be recognized is successfully recognized.
  • since the person corresponding to the face image to be recognized is being tracked, other face images of that person collected before the face image to be recognized were successfully recognized. Therefore, during the recognition of the face image to be recognized, even if its difference value from every reference face image group is not less than the first threshold value, the existence of the above-mentioned second image group indicates that the person is the tracked person corresponding to the second image group, and the face image to be recognized may simply be a non-frontal image of that person.
  • in other words, when the person corresponding to the face image to be recognized is being tracked, the person can still be successfully identified under the double restriction of the first threshold value and the second threshold value, even if the face image used for recognition is a non-frontal image. Therefore, during the continuous recognition of that person, the possibility of recognition success and recognition failure alternating due to changes in the person's position state can be reduced.
  • the above-mentioned method of recognizing the face image to be recognized according to the double restriction conditions of the first threshold value and the second threshold value may be called a fallback threshold mechanism.
  • In this way, successful recognition of the face image to be recognized can be achieved. Therefore, during the continuous recognition of the person corresponding to the face image to be recognized, the possibility of recognition success and recognition failure alternating due to changes in the person's position state is reduced, the robustness of face recognition is improved, personalized response actions can be provided continuously for the person, and the user experience is improved.
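The fallback threshold mechanism described above can be sketched as follows (a minimal illustration; the function name, the concrete threshold values, and the `tracked` flag are assumptions, not taken from the patent):

```python
def recognize(diffs, t1, t2, tracked):
    """Fallback-threshold recognition sketch.

    diffs: mapping person_info -> difference value between the face
    image to be recognized and that person's reference image group.
    t1, t2: first and second threshold values, with t1 < t2.
    tracked: whether the person in the image is currently tracked.
    Returns the matched person_info, or None on recognition failure.
    """
    assert t1 < t2
    # First image group: difference below the first threshold.
    first = [p for p, d in diffs.items() if d < t1]
    if first:
        return min(first, key=diffs.get)
    # Second image group: difference in [t1, t2) is accepted only
    # when the person is a tracked person.
    second = [p for p, d in diffs.items() if t1 <= d < t2]
    if second and tracked:
        return min(second, key=diffs.get)
    return None
```

A non-frontal image of a tracked person thus still resolves to the same identity, while an untracked face with a borderline difference value is rejected.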
  • FIG. 1 is a schematic flowchart of a face recognition method according to one or more embodiments of the present invention. As shown in Figure 1, the method may include the following steps:
  • S101 Acquire a face image to be identified, and calculate the difference value between the face image to be identified and each reference face image group;
  • each image included in each reference face image group corresponds to the same personnel information
  • a reference face image group includes multiple face images corresponding to the same person.
  • the reference face image of the person can be specified in each reference face image group in advance, where the reference face image may be a high-definition image that accurately and comprehensively characterizes the person's facial features, for example, the face image on the person's ID card or the face image on the person's driver's license.
  • each reference face image group may only include the reference face image of the person corresponding to the reference face image group.
  • a collected face image that successfully identifies a person and accurately characterizes that person's facial features can also be added to the reference face image group corresponding to the person, thereby increasing the number of images in the person's preset reference face image group and making the group a more accurate matching criterion for face recognition.
  • each image in each reference face image group corresponds to the same personnel information; thus, when the face image to be recognized is identified, the difference value between it and each reference face image group can be calculated directly, without calculating a difference value between the face image to be recognized and every individual reference face image.
  • each of the above-mentioned reference face image groups may be stored in a preset reference image library. When the reference image library is first constructed, before face recognition starts, at least one face image is stored in the library for each person that can be identified.
  • the face images corresponding to the same person form a reference face image group.
  • the collected face image can be added to the reference face image group corresponding to the face image in the reference image library.
  • the difference value between the to-be-recognized face image and each reference face image group can be further calculated.
  • the above-mentioned face image to be recognized may be the rectangular image area where the face is located, obtained by performing face detection on a person image collected by the image acquisition device.
  • face detection can be realized by DNN (Deep Neural Network, deep neural network) model, for example, MTCNN (Multi-task Cascaded Convolutional Networks, multi-task convolutional neural network), FisherFace face recognition algorithm, SSD (single shot multibox detector) algorithm, YOLO algorithm, etc.
  • a commonly used deep neural network model for face detection is the RetinaFace algorithm.
  • the input of the model is the collected person image
  • the output after detection is the rectangular image area used to represent the location of the face in the person image.
  • a rectangular image region is output for each face region in the person image.
  • key points of the face can also be detected within the output rectangular image area where the face is located.
  • the number of the key points can be 5 points or 68 points.
  • the number of detected keypoints is 5 points, including two eyes, nose and two mouth corners.
  • the number of detected key points is 68 points, so that more precise positions within the face can be covered, wherein Fig. 2(c) is the key point representation of Fig. 2(b).
  • the difference value between the face image to be recognized and each reference face image group can represent the difference between the features of the face image to be recognized and the features of the face images in that reference face image group: the smaller the difference value, the more similar the features, and the higher the possibility that the personnel information corresponding to the face image to be recognized is the personnel information corresponding to that reference face image group.
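The patent does not fix a particular distance; one plausible realization of the "difference value" is cosine distance between feature vectors, with a group's difference taken as the best (smallest) member distance. Both choices here are assumptions for illustration:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors; smaller
    means more similar, matching the difference-value semantics."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def group_difference(probe, group_features):
    """Difference between the probe feature and a reference face
    image group, taken here as the smallest member distance; taking
    an average over the group is another plausible choice."""
    return min(cosine_distance(probe, f) for f in group_features)
```

Identical vectors yield a difference of 0, orthogonal vectors a difference of 1, so thresholds such as the first and second threshold values would fall between these extremes.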
  • step S102 Determine whether there is a first image group according to the difference value; if there is a first image group, go to step S103; otherwise, go to step S104.
  • the first image group is a reference face image group whose difference value from the face image to be recognized is smaller than the first threshold value.
  • for each reference face image group, it can be determined whether the difference value between the face image to be recognized and that reference face image group is less than the first threshold value.
  • if so, the reference face image group can be determined to be the first image group. That is, when the difference value between the face image to be recognized and a certain reference face image group is less than the first threshold value, it can be determined that a first image group exists, and the first image group is that reference face image group whose difference value from the face image to be recognized is less than the first threshold value.
  • S103 Obtain a person identity corresponding to the face image to be recognized based on the person information corresponding to the first image group.
  • the identity of the person corresponding to the face image to be recognized is the identity of the person corresponding to the first image group; that is, the person corresponding to the face image to be recognized and the person corresponding to the first image group are the same person. At this time, it can be determined that face recognition of the face image to be recognized is successful, and further, the identity of the person corresponding to the face image to be recognized can be obtained based on the personnel information corresponding to the first image group.
  • the personnel information corresponding to the first image group may include at least one of the following types of information: the person's name, gender, preferences, place of origin, habits, etc. Of course, other types of information may also be included.
  • the person information corresponding to the face image to be recognized can be further determined from the person information corresponding to the first image group.
  • All of the personnel information corresponding to the first image group may be used as the personnel information corresponding to the face image to be recognized, or part of the personnel information corresponding to the first image group may be used as the personnel information corresponding to the face image to be recognized.
  • the preferences and riding habits of the person may be selected from the personnel information corresponding to the first image group as the personnel information corresponding to the face image to be recognized.
  • a personalized response operation may be provided for the person corresponding to the person image to be recognized according to the obtained person information.
  • for example, the seat-back recline angle of the person corresponding to the face image to be recognized can be adjusted, and a radio program can be recommended for the person, according to the person's preferences and riding habits in the obtained personnel information corresponding to the face image to be recognized.
  • S104 Determine whether there is a second image group according to the difference value; if there is a second image group, perform step S105.
  • the second image group is a reference face image group whose difference value from the face image to be recognized is not less than the first threshold value and less than the second threshold value, wherein the first threshold value is less than the second threshold value;
  • if the difference value between the face image to be recognized and each reference face image group is not less than the first threshold value, no first image group exists among the reference face image groups; that is, no reference face image group exactly matches the face image to be recognized. It can then be further determined whether the difference value between the face image to be recognized and each reference face image group is smaller than a second threshold value, where the first threshold value is smaller than the second threshold value.
  • a reference face image group whose difference value from the face image to be recognized is not less than the first threshold value and less than the second threshold value is the second image group.
  • if no second image group exists either, the recognition of the face image to be recognized has failed, and the personnel information corresponding to the face image to be recognized cannot be determined.
  • S105 Determine whether the person corresponding to the face image to be recognized is a tracked person; if it is a tracked person, perform step S106.
  • the person corresponding to the second image group may be the same person as the person corresponding to the face image to be recognized. Furthermore, in order to determine whether the person corresponding to the second image group and the person corresponding to the face image to be recognized are the same person, it may be further determined whether the person corresponding to the face image to be recognized is a tracked person.
  • that is, it is determined whether the person in the face image to be recognized has remained within the images acquired by the image acquisition device, so as to determine whether the absence of the above-mentioned first image group is caused by a change in the position state of a tracked person.
  • a preset tracking algorithm can be used to determine whether the person corresponding to the face image to be recognized is a tracked person, that is, whether the person has remained within the images collected by the image acquisition device.
  • the above preset tracking algorithm can be SORT (Simple Online and Realtime Tracking) or DeepSORT (Deep Simple Online and Realtime Tracking, which extends SORT with appearance features); of course, other tracking algorithms can also be used.
  • the detected face image can be compared one by one with pre-stored face images; when the detected face image has a high degree of similarity with a pre-stored face image, it can be determined that the person corresponding to the detected face image and the person corresponding to the pre-stored face image are the same person, and since the person corresponding to the pre-stored face image is the person being tracked, it can be determined that the person corresponding to the detected face image is the tracked person.
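A much-simplified sketch of the tracking check (SORT and DeepSORT additionally use motion prediction and, for DeepSORT, appearance features; the box format and IoU threshold here are assumptions):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) face boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def is_tracked(current_box, recent_boxes, iou_threshold=0.3):
    """A person counts as tracked if the current face box overlaps
    any face box from the preset time range before this frame."""
    return any(iou(current_box, b) >= iou_threshold for b in recent_boxes)
```

A face box that overlaps one seen in the recent frames is treated as the same, continuously present person; a box with no overlap is not.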
  • if the person in the face image to be recognized has remained within the images acquired by the image acquisition device, it can be determined that the absence of the above-mentioned first image group is caused by a change in the position state of the tracked person, and further, that the identity of the person corresponding to the face image to be recognized is the identity of the person corresponding to the second image group.
  • the reason why the difference value between the face image to be recognized and the second image group is not less than the above-mentioned first threshold value may be that a change in the position state of the person produced a non-frontal face image, which raises the difference value; the difference value caused by this situation, however, is generally not higher than the above-mentioned second threshold value.
  • the identity of the person corresponding to the face image to be recognized is the identity of the person corresponding to the second image group.
  • the person corresponding to the face image and the person corresponding to the second image group are the same person.
  • it can be determined that the face recognition of the face image to be recognized is successful, and further, based on the personnel information corresponding to the second image group, the identity of the person corresponding to the face image to be recognized can be obtained.
• the personnel information corresponding to the second image group may include at least one of the following types of information: personnel name, personnel gender, personnel preferences, personnel origin, personnel habits, etc.; of course, other types of information may also be included.
  • the person information corresponding to the face image to be recognized can be further determined from the person information corresponding to the second image group.
  • all the personnel information corresponding to the second image group can be used as the personnel information corresponding to the face image to be recognized, or part of the information in the personnel information corresponding to the second image group can be used as the personnel information corresponding to the face image to be recognized.
  • the preferences and riding habits of the person may be selected from the personnel information corresponding to the second image group as the personnel information corresponding to the face image to be recognized.
  • a personalized response operation may be provided for the person corresponding to the person image to be recognized according to the obtained person information.
• the seat back recline angle of the person corresponding to the face image to be recognized can be adjusted, and a radio program can be recommended for the person, according to the person's preferences and riding habits in the obtained personnel information corresponding to the face image to be recognized.
  • the person corresponding to the face image to be recognized is not the person being tracked, it can be determined that the recognition of the face image to be recognized has failed, and the person information corresponding to the face image to be recognized cannot be determined.
• by using the fallback threshold mechanism, through the judgment against the second threshold value, successful recognition of the face image to be recognized can be achieved even in the case where the recognition result cannot be determined by the first threshold value alone. Therefore, during the continuous recognition of the person corresponding to the face image to be recognized, the possibility of recognition success and recognition failure alternating due to changes in the person's position state can be reduced. In this way, the robustness of face recognition can be improved, so that personalized response actions are continuously provided for the person corresponding to the face image to be recognized, improving the user experience of the person.
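The two-threshold decision described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; all names (`recognize`, `diffs`, `is_tracked`) are invented for the example.

```python
# Illustrative sketch of the two-threshold ("fallback threshold") decision
# described above. All names are invented for this example.

def recognize(diffs, first_threshold, second_threshold, is_tracked):
    """diffs maps each reference face image group id to its difference
    value with the face image to be recognized. Returns the matched
    group id, or None when recognition fails."""
    best_group = min(diffs, key=diffs.get)
    best_diff = diffs[best_group]
    # First image group exists: difference below the strict first threshold.
    if best_diff < first_threshold:
        return best_group
    # Fallback: second image group exists (difference below the looser
    # second threshold) and tracking confirms the person never left the
    # images collected by the image acquisition device.
    if best_diff < second_threshold and is_tracked:
        return best_group
    return None
```

Note how recognition via the second threshold is accepted only when tracking confirms continuity, mirroring the fallback mechanism above.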
  • the above step S101 may include the following steps 1011-1012.
  • Step 1011 Calculate the similarity between the face image to be recognized and each reference face image group respectively.
• the similarity between the face image to be recognized and each reference face image group can be calculated first; that is, the face image to be recognized and each reference face image group yield a corresponding similarity, so that a plurality of similarities, one per reference face image group, can be calculated separately.
  • Step 1012 Based on the similarity, calculate the difference values between the face image to be recognized and each reference face image group.
• for each reference face image group, after calculating the similarity between the face image to be recognized and that reference face image group in step 1011, the difference value between the face image to be recognized and that reference face image group can be determined based on the similarity; alternatively, after calculating the similarities between the face image to be recognized and all reference face image groups in step 1011, the difference values between the face image to be recognized and each reference face image group can be determined at the same time.
• the difference value between the face image to be recognized and the j-th reference image group in each reference face image group may be represented by DIFF_j.
• m is the number of reference face image groups, that is, m reference face image groups are preset, and 1 ≤ j ≤ m.
• the difference value between the face image to be recognized and each reference face image group may be the difference between a preset value and the similarity, that is, the preset value minus the similarity between the face image to be recognized and the reference face image group; in general, the preset value is 1.
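As a minimal sketch of this computation, DIFF_j = preset value − S_j (the function name is illustrative):

```python
# Minimal sketch of DIFF_j = preset value - S_j; the function name is
# invented for this example.

def difference_values(similarities, preset_value=1.0):
    """Turn each similarity S_j between the face image to be recognized
    and the j-th reference face image group into a difference value."""
    return [preset_value - s for s in similarities]
```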
  • the above step 1011 may be performed in various ways to obtain the similarity between the face image to be recognized and each reference face image group, which is not specifically limited in this embodiment of the present invention.
  • the above-mentioned step 1011 may include the following steps 1011A-1011C.
  • Step 1011A Extract the first feature vector of the face image to be recognized.
  • Step 1011B Calculate the second feature vector of each reference face image group respectively, wherein, for each reference face image group, the second feature vector of the reference face image group is based on all the people in the reference face image group. The third feature vector of the face image is obtained.
• the second feature vector of the reference face image group is obtained based on the third feature vectors of the face images included in the reference face image group.
  • Step 1011C Calculate the similarity between the first feature vector of the face image to be identified and the second feature vector of each reference face image group, as the similarity between the face image to be identified and each group of reference face images.
• the first feature vector of the face image to be recognized can be extracted according to step 1011A, the second feature vector of each reference face image group can be calculated according to step 1011B, and by calculating the similarity between the first feature vector and the second feature vector of each reference face image group, the similarity between the face image to be recognized and each reference face image group is obtained.
  • the calculated similarity between the first feature vector of the face image to be recognized and the second feature vector of each reference face image group is the similarity between the face image to be recognized and each reference face image group.
• the above-mentioned first feature vector and each second feature vector are vectors of the same dimension and the same numerical type. Furthermore, since the second feature vector of each reference face image group is obtained based on the third feature vectors of all the face images in the reference face image group, the dimensions and numerical types of the third feature vectors of all the images in each reference face image group are the same as those of the first feature vector and each second feature vector described above.
• the above-mentioned first feature vector of the face image to be recognized and the third feature vectors of all face images in each reference face image group may be high-dimensional vectors extracted from the face images using a feature extraction network. For example, the first feature vector of the face image to be recognized and the third feature vectors of all the face images in each reference face image group can be obtained from the face image to be recognized and each reference face image using the InsightFace network.
  • the above feature extraction network can also be other networks, such as FaceNet, SphereFace, CosFace, etc., and the dimension and numerical type of the feature vector extracted by different feature extraction networks can be different.
  • the similarity between the first feature vector and each second feature vector of the above-mentioned face image to be recognized can be calculated in various ways.
• the similarity between the first feature vector and each second feature vector can be obtained by calculating the mean square error, the Euclidean distance, etc. of the first feature vector and the second feature vector of each reference face image group; this serves as the similarity between the face image to be recognized and each reference face image group.
• the mean square error of the first feature vector A and the second feature vector B is: MSE(A, B) = (1/n) Σ_{i=1}^{n} (A_i - B_i)^2, where n is the dimension of the first feature vector A and the second feature vector B, A_i is the i-th element value in the first feature vector A, and B_i is the i-th element value in the second feature vector B, 1 ≤ i ≤ n.
• the Euclidean distance between the first feature vector A and the second feature vector B is: d = sqrt( Σ_{i=1}^{n} (A_i - B_i)^2 ), where n is the dimension of the first feature vector A and the second feature vector B, A_i is the i-th element value in the first feature vector A, and B_i is the i-th element value in the second feature vector B, 1 ≤ i ≤ n.
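Both distance measures can be written directly from the formulas above. This is a plain-Python sketch; the function names are invented for illustration.

```python
import math

# Plain-Python sketch of the two distance measures above; function names
# are invented for this example.

def mse(a, b):
    """Mean square error between feature vectors A and B of dimension n."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def euclidean(a, b):
    """Euclidean distance d between feature vectors A and B."""
    assert len(a) == len(b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

Smaller values of either measure indicate a higher similarity between the face image to be recognized and the reference face image group.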
  • the similarity between the first feature vector and each second feature vector may be obtained by calculating the cosine value of the included angle between the first feature vector and each second feature vector. That is to say, the calculated cosine value of the angle between the first eigenvector and each second eigenvector can be used as the similarity between the first eigenvector and each second eigenvector.
• if the first feature vector and a certain second feature vector are the vectors A and B in Figure 3, respectively, then the cosine of the angle between the vectors A and B is: cos(θ) = (A_x·B_x + A_y·B_y) / ( sqrt(A_x^2 + A_y^2) · sqrt(B_x^2 + B_y^2) ), where cos(θ) is the cosine value of the angle between the vectors A and B, A_x and A_y are the abscissa and ordinate of the vector A in the two-dimensional image coordinate system, and B_x and B_y are the abscissa and ordinate of the vector B in the two-dimensional image coordinate system.
• for n-dimensional vectors, the cosine value of the angle between the vectors A and B can also be: cos(θ) = ( Σ_{i=1}^{n} A_i·B_i ) / ( sqrt(Σ_{i=1}^{n} A_i^2) · sqrt(Σ_{i=1}^{n} B_i^2) ), where n is the dimension of vectors A and B, A_i is the i-th element value in vector A, and B_i is the i-th element value in vector B, 1 ≤ i ≤ n.
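The n-dimensional cosine formula above translates directly into code; this sketch uses an illustrative function name.

```python
import math

# Sketch of the n-dimensional cosine similarity above; the function name
# is invented for this example.

def cosine_similarity(a, b):
    """cos(theta) between feature vectors A and B."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```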
  • the reference face image group corresponding to the first image group may be updated by using the face image to be recognized.
  • the face recognition method provided by the above embodiments of the present invention may further include the following steps 1-2.
  • Step 1 when there is a first image group, determine whether the difference between the face image to be recognized and the first image group is less than a third threshold value, where the third threshold value is less than the first threshold value; if so, Go to step 2.
  • Step 2 Update the reference face image group corresponding to the first image group by using the face image to be recognized.
  • the difference value between the face image to be recognized and the first image group is smaller than a third threshold value.
  • the third threshold value is smaller than the first threshold value
  • the difference value between the face image to be recognized and the first image group is smaller than the third threshold value
• the first image group can be updated by using the face image to be recognized, that is, the reference face image group corresponding to the first image group is updated. In this way, the face image to be recognized can become an image used in the subsequent face recognition process.
  • the second feature vector of each reference face image group is obtained based on the third feature vector of all face images in the reference face image group, therefore, when using the face image to be recognized to update the first image group Then, the second feature vector of the updated first image group needs to be updated, that is, the second feature vector of the reference face image group corresponding to the updated first image group is updated.
• the third feature vectors of all face images in the updated first image group may be determined first, and then the second feature vector of the updated first image group may be calculated based on the third feature vectors of all face images in the updated first image group.
  • the updated second feature vector of the first image group may be an average value of the third feature vectors of all face images in the updated first image group.
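Computing the second feature vector as the element-wise mean of the group's third feature vectors can be sketched as follows (the function name is illustrative):

```python
# Sketch of computing the second feature vector of a reference face image
# group as the element-wise mean of the third feature vectors of all face
# images in the group; the function name is invented for this example.

def group_feature(third_vectors):
    """third_vectors: list of equal-length third feature vectors."""
    n = len(third_vectors)
    dim = len(third_vectors[0])
    return [sum(v[i] for v in third_vectors) / n for i in range(dim)]
```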
  • the number of images included in each reference face image group should not be too large.
  • updating the reference face image group corresponding to the first image group by using the face image to be recognized may include the following steps 21-22:
  • Step 21 Determine whether the number of face images included in the reference face image group corresponding to the first image group reaches a preset number; if not, go to step 22.
  • Step 22 Add the face image to be recognized to the reference face image group corresponding to the first image group.
• the reference face image group corresponding to the first image group may include a preset number of face images; when the number of face images included in the reference face image group corresponding to the first image group has not reached the preset number, it means that a new face image can be added to the reference face image group corresponding to the first image group without causing serious cumulative errors when calculating the second feature vector, and the identity of the person corresponding to the added new face image is the same as the identity of the person corresponding to the first image group.
• the face image to be recognized can be added to the first image group, so that the reference face image group corresponding to the first image group is updated by using the face image to be recognized.
  • the number of face images included in the first image group is not greater than the preset number.
  • the specific value of the preset number may be determined based on the error accumulation rule and the accuracy requirement for face recognition.
  • the embodiment of the present invention does not limit the specific value of the preset number, for example, the preset number Can be 2, 3, etc.
  • using the face image to be recognized to update the reference face image group corresponding to the first image group may include the following steps 21 and 23-25:
  • Step 21 Determine whether the number of face images included in the reference face image group corresponding to the first image group reaches a preset number; if so, go to step 23.
  • Step 23 Obtain the first difference value between each face image in the reference face image group corresponding to the first image group and the first image group, and obtain the second difference value between the face to be recognized and the first image group.
  • Step 24 Determine whether there is a face image with the first difference value greater than the second difference value in the reference face image group corresponding to the first image group; if so, go to step 25.
  • Step 25 Delete the face image corresponding to the largest first difference value in the reference face image group corresponding to the first image group, and add the to-be-recognized face image to the reference face image group corresponding to the first image group.
  • the reference face image group corresponding to the first image group may include a preset number of face images, and when the number of face images included in the reference face image group corresponding to the first image group is When the preset number is reached, it means that if the reference face image group corresponding to the first image group is updated by using the face image to be recognized, a certain face image in the reference face image group corresponding to the first image group needs to be replaced. is a face image to be recognized, so as to ensure that the number of face images in the reference face image group corresponding to the updated first image group does not exceed a preset number.
• the face image to be recognized cannot be directly added to the reference face image group corresponding to the first image group; instead, the face image to be replaced in the reference face image group is replaced with the face image to be recognized, that is, the face image to be replaced in the reference face image group corresponding to the first image group is deleted, and the face image to be recognized is added to the reference face image group corresponding to the first image group after the deletion.
• the first difference value between each face image in the reference face image group corresponding to the first image group and the first image group can be obtained, and the second difference value between the face image to be recognized and the first image group can be obtained; the second difference value is the difference value between the face image to be recognized and the first image group among the difference values, calculated in the above step S101, between the face image to be recognized and each reference face image group.
• the first difference value between each face image in the reference face image group corresponding to the first image group and the first image group can be determined by calculating the similarity between the third feature vector of each face image in that reference face image group and the second feature vector of the first image group.
  • the determination process is similar to the above-mentioned process of calculating the difference between the face image to be recognized and each reference face image group, and will not be repeated here.
• the magnitude relationship between the second difference value and each first difference value can be determined, so as to determine whether there is a face image to be replaced in the reference face image group corresponding to the first image group.
  • a certain first difference value is greater than the second difference value, it can be indicated that the difference between the face image corresponding to the first difference value and the first image group is greater than the difference between the face image to be recognized and the first image group Difference, that is, relative to the face image corresponding to the first difference value, the similarity between the face image to be recognized and the first image group is higher.
  • the face image corresponding to the first difference value may be used as the face image to be replaced.
• the larger the first difference value, the lower the similarity between the image corresponding to that first difference value and the first image group. Therefore, when it is determined that there is a first difference value in the reference face image group corresponding to the first image group that is greater than the second difference value, the largest first difference value is necessarily greater than the second difference value, so the face image corresponding to the largest first difference value can be used as the face image to be replaced, and the face image corresponding to the largest first difference value in the reference face image group corresponding to the first image group can be directly replaced by the face image to be recognized.
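Steps 21-25 can be sketched as follows. This is an illustrative Python sketch of the capped-update logic, not the patent's implementation; all names are invented for the example.

```python
# Illustrative sketch of steps 21-25: append while the group is under the
# preset number, otherwise replace the face image with the largest first
# difference value, and only when that value exceeds the new image's
# second difference value. All names are invented for this example.

def update_group(group, new_image, new_diff, diff_fn, max_size):
    """group: list of face images; diff_fn(image) returns the first
    difference value between an image and the group; new_diff is the
    second difference value of the face image to be recognized."""
    if len(group) < max_size:
        group.append(new_image)                # step 22
        return True
    worst = max(group, key=diff_fn)            # step 23: largest first difference
    if diff_fn(worst) > new_diff:              # step 24
        group[group.index(worst)] = new_image  # step 25
        return True
    return False
```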
• the face images in a person's various certificates have high definition and can more accurately and comprehensively characterize the face features of the user, for example, the face image in an ID card or the face image in a driver's license; therefore, in order to improve the accuracy of face recognition, the face image in the person's certificate can usually be used as a face image in each reference face image group.
  • the first image group includes a reference face image group and a dynamic face image group, that is, the reference face image group corresponding to the first image group includes a reference face image group and a dynamic face image group an image group, and the reference face image group includes at least one reference face image;
  • using the face image to be recognized to update the reference face image group corresponding to the first image group may include the following step 26:
  • Step 26 Keep the reference face image group, and update the dynamic face image group with the face image to be recognized.
  • the face images in the reference face image group may be undeletable face images specified when constructing the first image group, and these face images may generally be of high definition, and An image that can more accurately and comprehensively characterize the user's facial features, such as a facial image in an ID card, a facial image in a driver's license, and the like.
  • the face images in the dynamic face image group may be the collected face images to be recognized that are gradually added during the face recognition process, and these face images may be deleted and replaced, thus, replaced with new people. face image.
• the first difference value between each face image in the dynamic face image group and the first image group can be acquired, and it can be judged whether there is a face image in the dynamic face image group whose first difference value is greater than the second difference value between the face image to be recognized and the first image group; when the judgment result is that such a face image exists, the face image corresponding to the largest first difference value in the dynamic face image group is deleted, and the face image to be recognized is added to the dynamic face image group.
  • the face image replaced by the face image to be recognized is the face image in the dynamic face image group, and in the subsequent face recognition process, the face image to be recognized may also be deleted and replaced.
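Step 26 can be sketched as follows, assuming the reference (certificate) images and the dynamic images are kept as separate lists; all names are invented for this example.

```python
# Sketch of step 26: the reference (certificate) group is never modified;
# only the dynamic group is updated, replacing its worst image when the
# new image is a better match. Names are invented for this example.

def update_dynamic_group(base_group, dynamic_group, new_image, new_diff, diff_fn):
    """diff_fn(image) returns the first difference value between an image
    and the first image group; new_diff is the second difference value."""
    worst = max(dynamic_group, key=diff_fn, default=None)
    if worst is not None and diff_fn(worst) > new_diff:
        dynamic_group[dynamic_group.index(worst)] = new_image
    return base_group + dynamic_group
```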
• the face region in the acquired face image to be recognized may be a non-vertical region, and a non-vertical face region will affect the accuracy of face recognition. Therefore, in order to improve the accuracy of face recognition, when the face region in the face image to be recognized is a non-vertical region, the face image to be recognized can be aligned to obtain an aligned face image to be recognized, in which the face region is a vertical region.
  • the face area in the face image to be recognized is a non-vertical area
• Fig. 4(a) can be aligned to obtain the aligned face image to be recognized as shown in Fig. 4(b), wherein the face region in the aligned face image to be recognized shown in Fig. 4(b) is a vertical region.
  • Step 3 Align the face images to be recognized.
  • the step of calculating the difference value between the face image to be recognized and each reference face image group may include the following step 1010:
  • Step 1010 Calculate the difference between the aligned face image to be recognized and each reference face image group.
  • the first image group in the above step S102 may be a reference face image group whose difference value from the aligned face image to be recognized is smaller than the first threshold value.
  • the second image group in the above step S104 may be a reference face image group whose difference value from the aligned face image to be recognized is not less than the first threshold value and less than the second threshold value.
• the way of aligning the face image to be recognized may be to calculate a transformation matrix according to the key points detected in the face region of the face image to be recognized; thus, the non-vertical face image to be recognized is converted into a vertical face image to be recognized through the transformation matrix, that is, the face image to be recognized is aligned through the transformation matrix.
• in the face image to be recognized, the left and right cheeks, the two eyes, and the left and right corners of the mouth should be symmetrical; for example, the key points corresponding to the two eyes should be on the same horizontal line, that is, in the two-dimensional coordinate system corresponding to the face image to be recognized, the vertical-axis coordinates of the key points corresponding to the two eyes are the same.
• the relative positions of the detected key points are fixed and predictable; therefore, the face region in the face image to be recognized only needs to be zoomed, translated, and rotated to obtain a vertical face region.
• each face image to be recognized can be understood as a two-dimensional matrix with coordinates, on which the following coordinate transformations are performed.
• the scaling matrix is [[c, 0], [0, c]], where c is the scaling ratio;
• the translation is by the vector (t_x, t_y), where t_x and t_y are respectively the horizontal-axis displacement and the vertical-axis displacement in the two-dimensional coordinate system corresponding to the face image to be recognized;
• the rotation matrix is [[cos θ, -sin θ], [sin θ, cos θ]], where θ is the angle to be rotated.
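Applied to one keypoint, the three transformations above compose as in this illustrative sketch; the application order (scale, then translate, then rotate) and the parameter names are assumptions made for the example, since the patent only lists the individual matrices.

```python
import math

# Illustrative composition of the three transformations above applied to
# a single keypoint (x, y); the order scale -> translate -> rotate is an
# assumption made for this example.

def align_point(x, y, c, tx, ty, theta):
    xs, ys = c * x, c * y                              # scaling by ratio c
    xt, yt = xs + tx, ys + ty                          # translation by (t_x, t_y)
    xr = xt * math.cos(theta) - yt * math.sin(theta)   # rotation by theta
    yr = xt * math.sin(theta) + yt * math.cos(theta)
    return xr, yr
```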
  • each acquired face image of the person may be stored.
  • judging whether the person corresponding to the face image to be recognized is a tracked person may include the following steps 1051-1055:
• Step 1051 Determine a face image to be tracked from a third image group, wherein the third image group is formed by the face images acquired within a preset time range before acquiring the face image to be recognized.
  • Step 1052 Calculate the third difference value between the face image to be tracked and each reference face image group.
• Step 1053 Determine whether there is a fourth image group according to the third difference value; if there is a fourth image group, go to step 1054; wherein, the fourth image group is the reference face image group whose third difference value from the face image to be tracked is less than the first threshold value.
  • Step 1054 Determine whether the fourth image group and the second image group are the same reference face image group; if so, go to step 1055.
  • Step 1055 Determine the person corresponding to the face image to be recognized as the tracked person.
• a third image group may be set to store the face images acquired within a preset time range before acquiring the face image to be recognized, so that when determining whether the person corresponding to the face image to be recognized is a tracked person, the face image to be tracked can be obtained from the third image group, and the third difference value between the obtained face image to be tracked and each reference face image group can be further calculated; based on the third difference values, the reference face image group whose third difference value from the face image to be tracked is smaller than the first threshold value is determined, that is, the fourth image group is determined.
  • the identity of the person corresponding to the face image to be tracked and the identity of the person corresponding to the fourth image group are the same . Therefore, if the fourth image group and the second image group are the same reference face image group, then, since the person corresponding to the face image to be tracked is the person to be tracked, it can be explained that the person corresponding to the second image group is the person to be tracked. track people.
  • the difference value between the face image to be recognized and the second image group is smaller than the second threshold value, that is, the probability that the identity of the person corresponding to the face image to be recognized is the same as the identity of the person corresponding to the second image group is relatively high, Therefore, it can be explained that the person corresponding to the face image to be recognized is the person to be tracked.
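Steps 1051-1055 can be sketched as follows. This is an illustrative Python sketch; the patent does not prescribe this interface, and all names are invented for the example.

```python
# Sketch of steps 1053-1055: the fourth image group is the reference group
# whose third difference value with the face image to be tracked is below
# the first threshold; the person is considered tracked when that group
# coincides with the second image group. Names are invented for this example.

def is_tracked_person(third_diffs, second_group_id, first_threshold):
    """third_diffs maps each reference face image group id to its third
    difference value with the face image to be tracked."""
    fourth = [g for g, d in third_diffs.items() if d < first_threshold]
    return second_group_id in fourth
```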
  • the embodiments of the present invention further provide a face recognition device.
  • FIG. 5 is a schematic structural diagram of a face recognition device according to an embodiment of the present invention. As shown in FIG. 5 , the device may include the following modules:
  • the difference value calculation module 510 is used to obtain the face image to be recognized, and calculate the difference value between the face image to be recognized and each reference face image group; wherein, each image included in each of the reference face image groups corresponds to information of the same person;
• the first image group judgment module 520 is used to judge whether there is a first image group according to the difference value; if there is a first image group, the first result determination module 530 is triggered; otherwise, the second image group judgment module 540 is triggered; wherein, the first image group is a reference face image group whose difference value from the face image to be recognized is less than the first threshold value;
  • the first result determination module 530 is configured to determine the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the first image group;
  • the second image group determination module 540 is configured to determine whether there is a second image group according to the difference value, and if the second image group exists, trigger the tracking person determination module 550;
• the tracked person judging module 550 is used to judge whether the person corresponding to the face image to be recognized is a tracked person; if so, the second result determination module 560 is triggered;
  • the second result determination module 560 is configured to determine the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group.
• the fallback threshold mechanism is used and, through the judgment against the second threshold value, successful recognition of the face image to be recognized can be achieved even in the case where the recognition result cannot be determined by the first threshold value alone. Therefore, during the continuous recognition of the person corresponding to the face image to be recognized, the possibility of recognition success and recognition failure alternating due to changes in the person's position state can be reduced. In this way, the robustness of face recognition can be improved, so that personalized response actions are continuously provided for the person corresponding to the face image to be recognized, improving the user experience of the person.
• an embodiment of the present invention also provides an electronic device, as shown in FIG. 6, including a processor 601, a communication interface 602, a memory 603 and a communication bus 604, wherein the processor 601, the communication interface 602, and the memory 603 communicate with each other through the communication bus 604.
  • the memory 603 is used to store computer programs.
  • the processor 601 is configured to implement the steps of any of the above-mentioned face recognition methods provided by the embodiments of the present invention when executing the program stored in the memory 603 .
  • the communication bus mentioned in the above electronic device may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an Extended Industry Standard Architecture (Extended Industry Standard Architecture, EISA) bus or the like.
  • the communication bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of presentation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the above electronic device and other devices.
  • the memory may include random access memory (Random Access Memory, RAM), and may also include non-volatile memory (Non-Volatile Memory, NVM), such as at least one disk storage.
  • the memory may also be at least one storage device located remotely from the aforementioned processor.
  • the above-mentioned processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processing, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a computer-readable storage medium is also provided, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps of any of the above-mentioned face recognition methods provided by the embodiments of the present invention are implemented.
  • a computer program product containing instructions, which, when run on a computer, cause the computer to execute the steps of any of the above-mentioned face recognition methods provided by the embodiments of the present invention.
  • the above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • when implemented in software, they may be realized in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present invention are produced.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave).
  • the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that includes an integration of one or more available media.
  • the usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., a Solid State Disk (SSD)), and the like.
  • when the recognition result of the face image to be recognized cannot be obtained in a single judgment, it can be further determined whether, among the reference face image groups, there is a second image group whose difference value from the face image to be recognized is not less than the first threshold value and less than the second threshold value. When such a second image group exists, it can be further judged whether the person corresponding to the face image to be recognized is a tracked person; if so, it can be determined that the person corresponding to the face image to be recognized is the person corresponding to the second image group. Therefore, based on the person information corresponding to the second image group, the identity of the person corresponding to the face image to be recognized can be determined, achieving successful recognition of the face image to be recognized.
  • because the person corresponding to the face image to be recognized is tracked, other face images of that person collected before the face image to be recognized were successfully recognized. Thus, during recognition of the face image to be recognized, even if its difference values from all reference face image groups are not less than the first threshold value, the existence of the above-mentioned second image group indicates that the person corresponding to the face image to be recognized is the tracked person corresponding to the second image group, and the face image to be recognized may be a non-frontal face image of that person.
  • in other words, when the person corresponding to the face image to be recognized is a tracked person, that person can still be successfully recognized under the double restriction of the first threshold value and the second threshold value, even if the face image used for recognition is a non-frontal image. Therefore, during continuous recognition of that person, the possibility that recognition success and recognition failure alternate because of changes in the person's position state can be reduced.
  • the above manner of recognizing the face image to be recognized under the double restriction of the first threshold value and the second threshold value may be called a fallback threshold mechanism. With it, the face image to be recognized can be successfully recognized, the robustness of face recognition is improved, personalized response actions can be provided continuously for the corresponding person, and the person's user experience is improved.


Abstract

A face recognition method and apparatus, and an electronic device, applied to the field of artificial intelligence. The method includes: acquiring a face image to be recognized, and computing difference values between the face image to be recognized and each reference face image group (S101); judging, according to the difference values, whether a first image group exists (S102); if a first image group exists, obtaining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the first image group (S103); if no first image group exists, judging whether a second image group exists (S104); if a second image group exists, judging whether the person corresponding to the face image to be recognized is a tracked person (S105); and if so, obtaining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group (S106).

Description

Face Recognition Method and Apparatus, and Electronic Device

TECHNICAL FIELD

The present disclosure relates to the field of artificial intelligence technology, and in particular to a face recognition method and apparatus, and an electronic device.
BACKGROUND

At present, an in-vehicle camera is usually installed in a vehicle so that the occupants can observe the environment inside and outside the vehicle while it is moving, improving driving safety.

While the vehicle is moving, the in-vehicle camera can capture the occupants' face images in real time, and the occupants' identities can be recognized from the captured face images, so that personalized response actions can be provided according to each occupant's identity and the occupants' user experience improved.

For example, the seat-back inclination angle and the rearview mirror positions can be adjusted automatically according to each occupant's habits, or songs can be recommended automatically according to each occupant's preferences.

In the related art, the in-vehicle camera captures occupants' face images in real time and performs face recognition on every frame. However, during a given capture, an occupant may be looking down, turned sideways in conversation, or facing the window, so the captured image does not contain the occupant's frontal face and identification of the occupant fails; at the next capture, the occupant's face may be turned toward the camera, a frontal face image is captured, and the occupant's identity is recognized successfully. Clearly, since occupants may be in different position states during face recognition, this situation can occur repeatedly.

Consequently, in the above related art, while the face images of a given occupant are recognized continuously, recognition success and recognition failure may alternate; that is, the robustness of face recognition is poor, so personalized response actions may not be provided continuously, severely affecting the occupant's user experience.
SUMMARY

According to a first aspect, one or more embodiments of the present invention provide a face recognition method, the method including:

acquiring a face image to be recognized, and computing difference values between the face image to be recognized and each reference face image group, where the images included in each reference face image group correspond to the same person information;

judging, according to the difference values, whether a first image group exists, where the first image group is a reference face image group whose difference value from the face image to be recognized is less than a first threshold value;

when a first image group exists, determining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the first image group;

when no first image group exists, judging, according to the difference values, whether a second image group exists, where the second image group is a reference face image group whose difference value from the face image to be recognized is not less than the first threshold value and less than a second threshold value, the first threshold value being less than the second threshold value;

if the second image group exists, judging whether the person corresponding to the face image to be recognized is a tracked person; and

if the person corresponding to the face image to be recognized is judged to be the tracked person, determining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group.
According to one or more embodiments, the step of computing the difference values between the face image to be recognized and each group of reference face images includes:

separately computing the similarity between the face image to be recognized and each reference face image group; and

based on the similarities, separately computing the difference value between the face image to be recognized and each reference face image group.

According to one or more embodiments, the step of computing the similarity between the face image to be recognized and each reference face image group includes:

extracting a first feature vector of the face image to be recognized;

separately computing a second feature vector of each reference face image group, where, for each reference face image group, the second feature vector of that group is obtained based on the third feature vectors of all face images in that group; and

separately computing the similarity between the first feature vector of the face image to be recognized and the second feature vector of each reference face image group as the similarity between the face image to be recognized and each reference face image group.

According to one or more embodiments, the method further includes:

when the first image group exists, judging whether the difference value between the face image to be recognized and the first image group is less than a third threshold value, the third threshold value being less than the first threshold value; and

if so, updating the reference face image group corresponding to the first image group with the face image to be recognized.

According to one or more embodiments, the step of updating the reference face image group corresponding to the first image group with the face image to be recognized includes:

judging whether the number of face images included in the reference face image group corresponding to the first image group has reached a preset number; and

if not, adding the face image to be recognized to the reference face image group corresponding to the first image group.

According to one or more embodiments, the step of updating the reference face image group corresponding to the first image group with the face image to be recognized includes:

judging whether the number of face images included in the reference face image group corresponding to the first image group has reached the preset number;

if so, separately obtaining a first difference value between each face image in the reference face image group corresponding to the first image group and the first image group, and obtaining a second difference value between the face image to be recognized and the first image group;

judging whether the reference face image group corresponding to the first image group contains a face image whose first difference value is greater than the second difference value; and

if so, deleting the face image corresponding to the largest first difference value from the reference face image group corresponding to the first image group, and adding the face image to be recognized to that group.

In one or more embodiments, the first image group includes a base face image group and a dynamic face image group, the base face image group including at least one base face image, and the step of updating the reference face image group corresponding to the first image group with the face image to be recognized includes:

keeping the base face image group unchanged, and updating the dynamic face image group with the face image to be recognized.

According to one or more embodiments, the step of judging whether the person corresponding to the face image to be recognized is a tracked person includes:

determining a face image to be tracked from a third image group, where the third image group is formed from face images acquired within a preset time range before the face image to be recognized was acquired;

computing a third difference value between the face image to be tracked and each reference face image group;

judging, according to the third difference values, whether a fourth image group exists, where the fourth image group is a reference face image group whose difference value from the face image to be tracked is less than the first threshold value;

if the fourth image group exists, judging whether the fourth image group and the second image group are the same reference face image group; and

if so, determining that the person corresponding to the face image to be recognized is the tracked person.
According to a second aspect, one or more embodiments of the present invention provide a face recognition apparatus, the apparatus including:

a difference value computation module, configured to acquire a face image to be recognized and compute difference values between the face image to be recognized and each reference face image group, where the images included in each reference face image group correspond to the same person information;

a first image group judgment module, configured to judge, according to the difference values, whether a first image group exists, and to trigger a first result determination module if the first image group exists and a second image group judgment module otherwise, where the first image group is a reference face image group whose difference value from the face image to be recognized is less than a first threshold value;

the first result determination module, configured to determine the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the first image group;

the second image group judgment module, configured to judge, according to the difference values, whether a second image group exists, and to trigger a tracked person judgment module if the second image group exists;

the tracked person judgment module, configured to judge whether the person corresponding to the face image to be recognized is a tracked person, and to trigger a second result determination module if so; and

the second result determination module, configured to determine the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group.

According to one or more embodiments, when computing the difference values between the face image to be recognized and each group of reference face images, the difference value computation module is configured to: separately compute the similarity between the face image to be recognized and each reference face image group; and, based on the similarities, separately compute the difference value between the face image to be recognized and each reference face image group.

According to one or more embodiments, when computing the similarities, the difference value computation module is configured to: extract a first feature vector of the face image to be recognized; separately compute a second feature vector of each reference face image group, where the second feature vector of each group is obtained based on the third feature vectors of all face images in that group; and separately compute the similarity between the first feature vector and each second feature vector as the similarity between the face image to be recognized and each reference face image group.

According to one or more embodiments, the first image group judgment module is further configured to: when the first image group exists, judge whether the difference value between the face image to be recognized and the first image group is less than a third threshold value, the third threshold value being less than the first threshold value; and if so, update the reference face image group corresponding to the first image group with the face image to be recognized.

According to one or more embodiments, when performing the update, the first image group judgment module is configured to: judge whether the number of face images in the reference face image group corresponding to the first image group has reached a preset number; and if not, add the face image to be recognized to that group.

According to one or more embodiments, when performing the update, the first image group judgment module is configured to: judge whether the number of face images in the reference face image group corresponding to the first image group has reached the preset number; if so, separately obtain a first difference value between each face image in that group and the first image group, and a second difference value between the face image to be recognized and the first image group; judge whether that group contains a face image whose first difference value is greater than the second difference value; and if so, delete the face image corresponding to the largest first difference value and add the face image to be recognized to the group.

According to one or more embodiments, the first image group includes a base face image group and a dynamic face image group, the base face image group including at least one base face image, and when performing the update, the first image group judgment module is configured to keep the base face image group unchanged and update the dynamic face image group with the face image to be recognized.

According to one or more embodiments, when judging whether the person corresponding to the face image to be recognized is a tracked person, the tracked person judgment module is configured to: determine a face image to be tracked from a third image group formed from face images acquired within a preset time range before the face image to be recognized was acquired; compute a third difference value between the face image to be tracked and each reference face image group; judge, according to the third difference values, whether a fourth image group exists, the fourth image group being a reference face image group whose difference value from the face image to be tracked is less than the first threshold value; if the fourth image group exists, judge whether the fourth image group and the second image group are the same reference face image group; and if so, determine that the person corresponding to the face image to be recognized is the tracked person.

According to a third aspect, one or more embodiments of the present invention provide an electronic device, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with each other through the communication bus;

the memory is configured to store a computer program; and

the processor is configured to execute the program stored in the memory to implement the steps of any of the face recognition methods provided in the first aspect.

According to a fourth aspect, one or more embodiments of the present invention provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements any of the face recognition methods provided in the first aspect.

According to a fifth aspect, one or more embodiments of the present invention provide a computer program product containing instructions that, when run on a computer, cause the computer to execute any of the face recognition methods provided in the first aspect.
BRIEF DESCRIPTION OF THE DRAWINGS

To describe the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic flowchart of a face recognition method according to one or more embodiments of the present invention;

FIG. 2(a) is a schematic diagram in which the number of key points detected in a face is 5, according to one or more embodiments of the present invention;

FIG. 2(b) is a schematic diagram in which the number of key points detected in a face is 68, according to one or more embodiments of the present invention;

FIG. 2(c) is a schematic representation of the key points of FIG. 2(b);

FIG. 3 is a schematic diagram of a first feature vector and a second feature vector in a two-dimensional coordinate system, according to one or more embodiments of the present invention;

FIG. 4(a) is a schematic diagram in which the face region in a face image to be recognized is a non-vertical region, according to one or more embodiments of the present invention;

FIG. 4(b) is a schematic diagram in which, after aligning FIG. 4(a), the face region in the aligned face image to be recognized is a vertical region;

FIG. 5 is a schematic structural diagram of a face recognition apparatus according to one or more embodiments of the present invention;

FIG. 6 is a schematic structural diagram of an electronic device according to one or more embodiments of the present invention.
DETAILED DESCRIPTION

The embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments herein without creative effort fall within the scope of the present disclosure.

In the related art, while the face images of a given occupant are recognized continuously, recognition success and recognition failure may alternate; that is, the robustness of face recognition is poor, so personalized response actions may not be provided continuously, severely affecting the occupant's user experience.

One or more embodiments of the present invention provide a face recognition method. The method is applicable to any scenario requiring continuous recognition of face images, for example, continuously recognizing vehicle occupants from face images captured by an in-vehicle camera while the vehicle is moving, or continuously recognizing pedestrians' face images with road traffic cameras.

Moreover, the method can be applied to any type of electronic device; that is, the execution subject of the method may be any type of electronic device, such as an in-vehicle camera, a mobile phone, a laptop computer or a desktop computer, hereinafter simply referred to as the electronic device.

An electronic device that can capture face images directly (e.g., an in-vehicle camera) may execute the method provided by the embodiments of the present invention itself to continuously recognize the captured face images, or may send the captured face images to another electronic device (e.g., a desktop computer) that executes the method to continuously recognize them. An electronic device that cannot capture face images directly (e.g., a desktop computer) may obtain face images from another electronic device that can (e.g., an in-vehicle camera) and then execute the method to continuously recognize the obtained face images. Both are reasonable.

Accordingly, the embodiments of the present invention do not limit the applicable scenarios or the execution subject of the face recognition method provided herein.
A face recognition method according to one or more embodiments of the present invention may include the following steps:

acquiring a face image to be recognized, and computing difference values between the face image to be recognized and each reference face image group, where the images included in each reference face image group correspond to the same person information;

judging, according to the difference values, whether a first image group exists, where the first image group is a reference face image group whose difference value from the face image to be recognized is less than a first threshold value;

when a first image group exists, determining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the first image group;

when no first image group exists, judging, according to the difference values, whether a second image group exists, where the second image group is a reference face image group whose difference value from the face image to be recognized is not less than the first threshold value and less than a second threshold value, the first threshold value being less than the second threshold value;

if a second image group exists, judging whether the person corresponding to the face image to be recognized is a tracked person; and

if so, determining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group.

According to one or more embodiments, when the difference values between the face image to be recognized and all reference face image groups are not less than the first threshold value, so that the recognition result cannot be obtained in a single judgment, it can be further judged whether the reference face image groups contain a second image group whose difference value from the face image to be recognized is not less than the first threshold value and less than the second threshold value. When the second image group exists, it can be further judged whether the person corresponding to the face image to be recognized is a tracked person; if so, the person corresponding to the face image to be recognized is the person corresponding to the second image group, and the person's identity can be determined based on the person information corresponding to the second image group, achieving successful recognition of the face image to be recognized.

According to one or more embodiments, since the person corresponding to the face image to be recognized is tracked, other face images of that person collected before this one were successfully recognized. Thus, during recognition of the face image to be recognized, even if its difference values from all reference face image groups are not less than the first threshold value, the existence of the above second image group indicates that the person corresponding to the face image to be recognized is the tracked person corresponding to the second image group, and the face image to be recognized may be a non-frontal face image of that person.

In other words, when the person corresponding to the face image to be recognized is a tracked person, that person can still be successfully recognized under the double restriction of the first threshold value and the second threshold value, even if the face image used in a given recognition is non-frontal. This reduces the possibility that recognition success and failure alternate because of changes in the person's position state. The above manner of recognition under the double restriction of the first and second threshold values may be called a fallback threshold mechanism.

On this basis, with the fallback threshold mechanism, the judgment against the second threshold value makes successful recognition possible when the result cannot be determined by the first threshold value, reducing alternating recognition success and failure during continuous recognition. This improves the robustness of face recognition, so personalized response actions can be provided continuously for the person corresponding to the face image to be recognized, improving the person's user experience.
The face recognition method according to one or more embodiments of the present invention is described in detail below with reference to the drawings.

FIG. 1 is a schematic flowchart of a face recognition method according to one or more embodiments of the present invention. As shown in FIG. 1, the method may include the following steps:

S101: acquiring a face image to be recognized, and computing difference values between the face image to be recognized and each reference face image group;

where the images included in each reference face image group correspond to the same person information.

It should be noted that one reference face image group includes multiple face images of the same person. When the reference face image groups are first set up, a base face image of the person may be designated in each group. The base face image may be an image of high definition that characterizes the person's facial features accurately and comprehensively, for example the face image on the person's identity card or driver's license. Accordingly, when first set up, each reference face image group may include only the base face image of the corresponding person.

During face recognition, a face image collected for a recognized person may likewise characterize that person's facial features accurately, so the collected face image can be added to the reference face image group corresponding to that person. The number of images in the group thereby increases, making the reference face image group a more accurate matching standard for face recognition.

Moreover, since all images in a reference face image group correspond to the same person information, when recognizing a face image it suffices to compute the difference value between the face image to be recognized and each group, rather than between the face image to be recognized and every individual reference face image.

According to one or more embodiments, the reference face image groups may be stored in a preset reference image library. When the library is first built and recognition has not yet started, the library stores at least one face image for each person who may be recognized during face recognition; the face images of the same person form one reference face image group. After recognition starts, collected face images can be added to the corresponding reference face image groups in the library.

On this basis, once the face image to be recognized is acquired, its difference value from each reference face image group can be computed.

According to one or more embodiments, the face image to be recognized may be the rectangular image region at the face position obtained by performing face detection on a person image captured by an image acquisition device.

Face detection may be implemented with a DNN (Deep Neural Network) model, for example MTCNN (Multi-task Cascaded Convolutional Networks), the FisherFace face recognition algorithm, the SSD (single shot multibox detector) algorithm, or the YOLO algorithm; a commonly used deep neural network model for face recognition at present is the RetinaFace algorithm.

According to one or more embodiments, when detection is performed with a model, the input is the captured person image and the output is the rectangular image region representing the face position in that image; one rectangular image region is output for each face region in the person image.

In addition, when performing face detection with a model, key points of the face can also be detected within the output rectangular region at the face position. The number of key points may be 5 or 68. For example, as shown in FIG. 2(a), 5 key points are detected: the two eyes, the nose, and the two mouth corners. As shown in FIG. 2(b), 68 key points are detected, covering more precise positions in the face; FIG. 2(c) is the key point representation of FIG. 2(b).

The difference value between the face image to be recognized and each reference face image group characterizes the difference between the features of the face image to be recognized and those of the face images in the group: the smaller the difference value, the smaller the feature difference, the higher the similarity, and the more likely it is that the person information corresponding to the face image to be recognized is the person information of that reference face image group.

The computation of the difference values is illustrated with examples later.

S102: judging, according to the difference values, whether a first image group exists; if so, executing step S103; otherwise, executing step S104.

The first image group is a reference face image group whose difference value from the face image to be recognized is less than a first threshold value.

After the difference values between the face image to be recognized and the preset reference face image groups are computed, it can be judged, for each group, whether the difference value is less than the first threshold value. When a reference face image group exists whose difference value from the face image to be recognized is less than the first threshold value, that group is determined to be the first image group; that is, the first image group exists, and it is the reference face image group whose difference value from the face image to be recognized is less than the first threshold value.
S103: obtaining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the first image group.

Since the difference value between the face image to be recognized and the first image group is less than the first threshold value, the identity of the person corresponding to the face image to be recognized is the identity corresponding to the first image group; that is, they are the same person. At this point, face recognition of the face image to be recognized succeeds, and the person's identity can be obtained based on the person information corresponding to the first image group.

According to one or more embodiments, the person information corresponding to the first image group, i.e., the person information of its reference face image group, may include at least one of: the person's name, gender, preferences, native place, habits, and so on; other types of information may also be included.

According to one or more embodiments, after the identity is obtained, the person information corresponding to the face image to be recognized can be determined from the person information corresponding to the first image group: either all of it, or a part of it. For example, when recognizing vehicle occupants, the person's preferences and riding habits may be selected from the person information corresponding to the first image group as the person information corresponding to the face image to be recognized.

According to one or more embodiments, after the person information is obtained, a personalized response operation can be provided for the corresponding person according to that information. For example, when recognizing vehicle occupants, the seat-back inclination angle of the person's seat can be adjusted according to the person's preferences and riding habits, and radio programs can be recommended for the person.

S104: judging, according to the difference values, whether a second image group exists; if so, executing step S105.

The second image group is a reference face image group whose difference value from the face image to be recognized is not less than the first threshold value and less than a second threshold value, the first threshold value being less than the second threshold value.

When the difference values between the face image to be recognized and all reference face image groups are not less than the first threshold value, no first image group exists; that is, no reference face image group matches the face image to be recognized particularly well. It can then be further judged whether any difference value between the face image to be recognized and a reference face image group is less than the second threshold value, where the first threshold value is less than the second threshold value. In other words, it can be determined whether there is a reference face image group whose difference value from the face image to be recognized lies between the first and second threshold values, i.e., a group that matches the face image to be recognized fairly well.

If such a group exists, the reference face image group whose difference value from the face image to be recognized is not less than the first threshold value and less than the second threshold value is the second image group. Correspondingly, if no second image group exists, recognition of the face image to be recognized fails, and the corresponding person information cannot be determined.

S105: judging whether the person corresponding to the face image to be recognized is a tracked person; if so, executing step S106.

When the second image group is determined to exist, the person corresponding to the second image group may be the same person as the one corresponding to the face image to be recognized. To confirm whether they are the same person, it can be further judged whether the person corresponding to the face image to be recognized is a tracked person.

That is, by judging whether the person corresponding to the face image to be recognized is tracked, it can be determined whether the capture subject of the face image to be recognized has remained within the images captured by the image acquisition device, and thus whether the absence of a first image group is caused by a change in the tracked person's position state.

This judgment can be made in many ways, which the embodiments of the present invention do not specifically limit. According to one or more embodiments, a preset tracking algorithm can be used to judge whether the person corresponding to the face image to be recognized is tracked, i.e., whether the person has remained within the captured images. The preset tracking algorithm may be SORT (Simple Online and Realtime Tracking) or DeepSORT (Deep Simple Online and Realtime Tracking); other tracking algorithms are also reasonable.

A detected face image can be compared one by one with pre-stored face images; when the similarity between the detected face image and some pre-stored face image is high, the person in the detected image and the person in the pre-stored image are the same person, and since the person in the pre-stored image is tracked, the person in the detected face image is a tracked person.

S106: obtaining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group.

If the person corresponding to the face image to be recognized is judged to be a tracked person, the capture subject of the face image to be recognized has remained within the captured images, so the absence of a first image group is caused by the change in the tracked person's position state; the identity of the person corresponding to the face image to be recognized is therefore the identity corresponding to the second image group.

Moreover, the reason the difference value between the face image to be recognized and the second image group is not less than the first threshold value may be that the person's position state changed, so the acquired face image is a non-frontal image of the person, making the difference value larger, though not larger than the second threshold value.

Thus, since the person corresponding to the face image to be recognized is tracked, the person corresponding to the face image to be recognized and the person corresponding to the second image group are the same person. Face recognition of the face image to be recognized succeeds, and the person's identity can be obtained based on the person information corresponding to the second image group.

According to one or more embodiments, the person information corresponding to the second image group, i.e., the person information of its reference face image group, may likewise include at least one of the person's name, gender, preferences, native place, habits, and so on. After the identity is obtained, all or part of this information can be taken as the person information corresponding to the face image to be recognized, and personalized response operations can be provided accordingly, for example adjusting the seat-back inclination angle of the person's seat and recommending radio programs according to the person's preferences and riding habits.

Correspondingly, if the person corresponding to the face image to be recognized is judged not to be a tracked person, recognition of the face image to be recognized fails, and the corresponding person information cannot be determined.

As can be seen, according to one or more embodiments, with the fallback threshold mechanism, the judgment against the second threshold value makes successful recognition of the face image to be recognized possible when the result cannot be determined by the first threshold value. This reduces alternating recognition success and failure during continuous recognition caused by changes in the person's position state, improves the robustness of face recognition, and allows personalized response actions to be provided continuously for the person, improving the user experience.
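The decision flow of steps S101-S106 can be sketched compactly. The following is a minimal Python sketch, not the patented implementation: the threshold values, the `diffs` dictionary form, and the `is_tracked` callback are all hypothetical names introduced for illustration, and for simplicity only the closest reference face image group is checked against the thresholds.

```python
# Threshold values are illustrative; the method only requires T1 < T2.
T1, T2 = 0.4, 0.6

def recognize(diffs, is_tracked):
    """diffs: {group_id: difference value from the face image to be recognized}.
    is_tracked(group_id) -> whether the corresponding person is being tracked."""
    best = min(diffs, key=diffs.get)
    if diffs[best] < T1:                       # S102/S103: a first image group exists
        return best
    if diffs[best] < T2 and is_tracked(best):  # S104-S106: fallback threshold mechanism
        return best
    return None                                # recognition fails
```

A non-frontal capture of a tracked person (difference between T1 and T2) is thus still resolved to the same identity instead of failing outright.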
The manner of computing the difference values between the face image to be recognized and each reference face image group in step S101 is illustrated below.

According to one or more embodiments, step S101 may include the following steps 1011-1012.

Step 1011: separately computing the similarity between the face image to be recognized and each reference face image group.

In this step, for each reference face image group, the similarity between the face image to be recognized and that group can first be computed; that is, one similarity is obtained for each reference face image group, so multiple similarities, one per group, are computed separately.

Step 1012: based on the similarities, separately computing the difference value between the face image to be recognized and each reference face image group.

In this step, for each reference face image group, the difference value may be determined from its similarity as soon as that similarity is computed in step 1011, or all difference values may be determined together after the similarities with all reference face image groups have been computed.

According to one or more embodiments, the difference value between the face image to be recognized and the j-th reference image group may be denoted DIFF_j, where 1 ≤ j ≤ m and m is the number of reference face image groups, i.e., m reference face image groups are preset.

In this step, the difference value between the face image to be recognized and each reference face image group may be a preset value minus the similarity, i.e., the preset value minus the similarity between the face image to be recognized and that group; generally, the preset value is 1.

According to one or more embodiments, step 1011 can be performed in many ways to obtain the similarities, which the embodiments of the present invention do not specifically limit.

According to one or more embodiments, step 1011 may include the following steps 1011A-1011C.

Step 1011A: extracting a first feature vector of the face image to be recognized.

Step 1011B: separately computing a second feature vector of each reference face image group, where, for each reference face image group, the second feature vector of that group is obtained based on the third feature vectors of all face images in that group.

Specifically, for each reference image group, if the group includes one face image, its second feature vector is the third feature vector of that face image; if the group includes multiple face images, its second feature vector is the average of the third feature vectors of all face images in the group.

Step 1011C: separately computing the similarity between the first feature vector of the face image to be recognized and the second feature vector of each reference face image group as the similarity between the face image to be recognized and each group of reference face images.

When computing the similarities, the first feature vector can first be extracted according to step 1011A and the second feature vectors computed according to step 1011B; the similarity between the first feature vector and the second feature vector of each reference face image group is then the similarity between the face image to be recognized and that group.

It should be noted that, to obtain these similarities by computing the similarity of feature vectors, the first feature vector and all second feature vectors must be vectors of the same dimension and value type. Further, since the second feature vector of each reference face image group is obtained based on the third feature vectors of all face images in that group, the third feature vectors of all images in each group also have the same dimension and value type as the first and second feature vectors.

According to one or more embodiments, the first feature vector of the face image to be recognized and the third feature vectors of all face images in the reference face image groups may be high-dimensional vectors extracted from the face images with a feature extraction network. For example, they may be 512-dimensional floating-point vectors extracted with the InsightFace network from the face image to be recognized and from all face images in the reference face image groups.

Of course, other feature extraction networks, such as FaceNet, SphereFace or CosFace, may also be used; the dimension and value type of the extracted feature vectors may differ between networks.

The similarity between the first feature vector and each second feature vector can then be computed in many ways, for example via the mean squared deviation or the Euclidean distance between the first feature vector and the second feature vector of each reference face image group.
Exemplarily, for a first feature vector A and a second feature vector B, the mean squared deviation is computed as:

$$\sigma=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(A_i-B_i)^2}$$

where σ is the mean squared deviation of the first feature vector A and the second feature vector B, n is the dimension of A and B, A_i is the i-th element of A, B_i is the i-th element of B, and 1 ≤ i ≤ n.

Exemplarily, for a first feature vector A and a second feature vector B, the Euclidean distance is computed as:

$$d=\sqrt{\sum_{i=1}^{n}(A_i-B_i)^2}$$

where d is the Euclidean distance between the first feature vector A and the second feature vector B, n is the dimension of A and B, A_i is the i-th element of A, B_i is the i-th element of B, and 1 ≤ i ≤ n.
According to one or more embodiments, the similarity between the first feature vector and each second feature vector can be obtained by computing the cosine of the angle between them; that is, the computed cosine value is taken as the similarity between the first feature vector and each second feature vector.

For example, as shown in FIG. 3, if the first feature vector and a second feature vector are the vectors A and B in FIG. 3, the cosine of the angle between A and B is:

$$\cos(\theta)=\frac{A_xB_x+A_yB_y}{\sqrt{A_x^2+A_y^2}\,\sqrt{B_x^2+B_y^2}}$$

where cos(θ) is the cosine of the angle between vectors A and B, A_x and A_y are the horizontal and vertical coordinates of vector A in the two-dimensional image coordinate system, and B_x and B_y are the horizontal and vertical coordinates of vector B.

More generally, the cosine of the angle between vectors A and B is:

$$\cos(\theta)=\frac{\sum_{i=1}^{n}A_iB_i}{\sqrt{\sum_{i=1}^{n}A_i^2}\,\sqrt{\sum_{i=1}^{n}B_i^2}}$$

where n is the dimension of vectors A and B, A_i is the i-th element of vector A, B_i is the i-th element of vector B, and 1 ≤ i ≤ n.
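The similarity and difference value computation of steps 1011A-1011C can be illustrated as follows. This is a minimal Python sketch under stated assumptions: plain lists stand in for the extracted feature vectors, cosine similarity is used as in the formulas above, and the preset value is taken to be 1. All function names are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def group_embedding(third_vectors):
    """Second feature vector of a group: element-wise mean of its third feature vectors."""
    n = len(third_vectors)
    return [sum(col) / n for col in zip(*third_vectors)]

def difference(first_vector, group_third_vectors, preset_value=1.0):
    """Difference value: the preset value (1 by default) minus the similarity."""
    return preset_value - cosine_similarity(first_vector, group_embedding(group_third_vectors))
```

A group containing a single face image reduces to comparing against that image's third feature vector, matching the single-image case described in step 1011B.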
When the first image group is judged to exist, the similarity between the face image to be recognized and the first image group is high. Therefore, to expand the reference face image groups and obtain a more accurate matching standard for face recognition, the reference face image group corresponding to the first image group can be updated with the face image to be recognized.

On this basis, the face recognition method provided by the embodiments of the present invention may further include the following steps 1-2.

Step 1: when the first image group exists, judging whether the difference value between the face image to be recognized and the first image group is less than a third threshold value, the third threshold value being less than the first threshold value; if so, executing step 2.

Step 2: updating the reference face image group corresponding to the first image group with the face image to be recognized.

According to one or more embodiments, when the first image group is judged to exist, it can be further judged whether the difference value between the face image to be recognized and the first image group is less than the third threshold value. Since the third threshold value is less than the first threshold value, a difference value below the third threshold value means the similarity between the face image to be recognized and the first image group is high, so the first image group, i.e., its corresponding reference face image group, can be updated with the face image to be recognized. The face image to be recognized thereby becomes an image used in subsequent face recognition.

Since the second feature vector of each reference face image group is obtained based on the third feature vectors of all face images in the group, after the update the second feature vector of the updated first image group, i.e., of its reference face image group, must be updated as well: first determine the third feature vectors of all face images in the updated first image group, then compute the second feature vector from them, for example as their average. Subsequent face recognition then uses the updated second feature vector of the first image group.

Further, to avoid reduced recognition accuracy caused by error accumulation, the number of images included in each reference face image group should not be too large.

On this basis, according to one or more embodiments, updating the reference face image group corresponding to the first image group with the face image to be recognized in step 2 may include the following steps 21-22:

Step 21: judging whether the number of face images included in the reference face image group corresponding to the first image group has reached a preset number; if not, executing step 22.

Step 22: adding the face image to be recognized to the reference face image group corresponding to the first image group.

According to one or more embodiments, the reference face image group corresponding to the first image group may include up to the preset number of face images. When the number of face images in the group has not reached the preset number, a new face image, whose person identity is the same as that of the first image group, can be added to the group without causing serious accumulated error in the computation of the second feature vector. Therefore, when the number of face images in the first image group has not reached the preset number and the difference value between the face image to be recognized and the first image group is less than the third threshold value, the face image to be recognized can be added to the first image group, updating its reference face image group; after the addition, the number of face images in the group does not exceed the preset number.

The specific value of the preset number may be determined from the law of error accumulation and the required accuracy of face recognition; the embodiments of the present invention do not limit it. For example, the preset number may be 2, 3, etc.
According to one or more embodiments, updating the reference face image group corresponding to the first image group with the face image to be recognized in step 2 may include step 21 and the following steps 23-25:

Step 21: judging whether the number of face images included in the reference face image group corresponding to the first image group has reached the preset number; if so, executing step 23.

Step 23: separately obtaining a first difference value between each face image in the reference face image group corresponding to the first image group and the first image group, and obtaining a second difference value between the face image to be recognized and the first image group.

Step 24: judging whether the reference face image group corresponding to the first image group contains a face image whose first difference value is greater than the second difference value; if so, executing step 25.

Step 25: deleting the face image corresponding to the largest first difference value from the reference face image group corresponding to the first image group, and adding the face image to be recognized to that group.

According to one or more embodiments, when the number of face images in the reference face image group corresponding to the first image group has reached the preset number, updating the group with the face image to be recognized requires replacing one of its face images, so that the number of face images in the updated group does not exceed the preset number.

Accordingly, the face image to be recognized cannot simply be added to the group; a face image to be replaced must first be determined among all face images in the reference face image group corresponding to the first image group. That image is then replaced by the face image to be recognized; that is, the image to be replaced is deleted from the group, and the face image to be recognized is added to the group after the deletion.

To determine the image to be replaced, when the number of face images in the group has reached the preset number, the first difference value between each face image in the group and the first image group, and the second difference value between the face image to be recognized and the first image group, can be obtained. The second difference value is exactly the difference value between the face image to be recognized and the first image group computed in step S101. The first difference value of each face image in the group can be determined by computing the similarity between its third feature vector and the second feature vector of the first image group, a process analogous to the computation of the difference values described above, which is not repeated here.

After the second difference value and the first difference values are obtained, the magnitude relation between the second difference value and each first difference value can be determined, and it can be judged whether the group contains a face image whose first difference value is greater than the second difference value. When some first difference value is greater than the second difference value, the corresponding face image differs from the first image group more than the face image to be recognized does; that is, relative to that face image, the face image to be recognized is more similar to the first image group, so that face image may serve as the image to be replaced.

Understandably, the larger the first difference value, the lower the similarity between the corresponding image and the first image group. Therefore, when the group is judged to contain face images whose first difference value is greater than the second difference value, the face image corresponding to the largest first difference value is deleted from the reference face image group corresponding to the first image group, and the face image to be recognized is added to the group after the deletion.

Clearly, when multiple face images have first difference values greater than the second difference value, the largest first difference value is greater than the second difference value, so the face image corresponding to the largest first difference value serves as the image to be replaced and is replaced by the face image to be recognized; when exactly one such face image exists, it can be replaced directly by the face image to be recognized.
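Steps 21 and 23-25 amount to a bounded-size replacement policy. The following is a minimal Python sketch with hypothetical names; it operates on precomputed first difference values rather than on images, and returns what the update should do.

```python
def plan_group_update(first_diffs, second_diff, preset_number):
    """first_diffs: first difference value of each stored face image (hypothetical form).
    second_diff: second difference value of the face image to be recognized.
    Returns 'append', the index of the image to replace, or None (no update)."""
    if len(first_diffs) < preset_number:
        return 'append'                        # step 22: room left in the group
    worst = max(range(len(first_diffs)), key=lambda i: first_diffs[i])
    if first_diffs[worst] > second_diff:       # steps 24-25: replace the farthest image
        return worst
    return None
```

After an append or replacement, the group's second feature vector would be recomputed as the mean of the remaining third feature vectors, as described above.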
Usually, the face images in a person's various certificates are of high definition and characterize the person's facial features accurately and comprehensively, for example the face images on identity cards or driver's licenses. Therefore, to improve the accuracy of face recognition, the face images from a person's certificates can usually be used as face images in the reference face image groups.

On this basis, according to one or more embodiments, the first image group includes a base face image group and a dynamic face image group; that is, the reference face image group corresponding to the first image group includes a base face image group, containing at least one base face image, and a dynamic face image group.

Correspondingly, according to one or more embodiments, updating the reference face image group corresponding to the first image group with the face image to be recognized in step 2 may include the following step 26:

Step 26: keeping the base face image group unchanged, and updating the dynamic face image group with the face image to be recognized.

According to one or more embodiments, the face images in the base face image group may be designated when the first image group is constructed and may not be deleted; they are usually high-definition images that characterize the person's facial features accurately and comprehensively, such as the face images on identity cards or driver's licenses.

The face images in the dynamic face image group may be collected face images to be recognized that are added gradually during face recognition; these images may be deleted and replaced by new face images.

On this basis, when the reference face image group corresponding to the first image group is updated with the face image to be recognized, what is updated is the dynamic face image group of the first image group.

For example, when the number of face images in the reference face image group corresponding to the first image group has reached the preset number, the first difference value between each face image in the dynamic face image group and the first image group can be obtained, and it can be judged whether the dynamic face image group contains a face image whose first difference value is greater than the second difference value between the face image to be recognized and the first image group; if so, the face image corresponding to the largest first difference value in the dynamic face image group is deleted, and the face image to be recognized is added to the dynamic face image group. Clearly, the replaced image is one in the dynamic face image group, and in subsequent face recognition the face image to be recognized may itself be deleted and replaced.
Understandably, in many cases, because of the influence of the person's position state, the face region in the acquired face image to be recognized may be a non-vertical region. Since the face image to be recognized itself is vertical, a non-vertical face region can affect the accuracy of face recognition. Therefore, to improve accuracy, when the face region in the face image to be recognized is non-vertical, the face image can be aligned to obtain an aligned face image to be recognized whose face region is a vertical region.

For example, as shown in FIG. 4(a), the face region in the face image to be recognized is a non-vertical region; aligning FIG. 4(a) yields the aligned face image to be recognized shown in FIG. 4(b), whose face region is a vertical region.

On this basis, according to one or more embodiments, when the face region in the face image to be recognized is non-vertical, before the step of computing the difference values in step S101, the face recognition method provided by the embodiments of the present invention may further include the following step 3:

Step 3: aligning the face image to be recognized.

Correspondingly, according to one or more embodiments, the step of computing the difference values in step S101 may then include the following step 1010:

Step 1010: computing the difference values between the aligned face image to be recognized and each reference face image group.

The first image group in step S102 is then a reference face image group whose difference value from the aligned face image to be recognized is less than the first threshold value, and the second image group in step S104 is a reference face image group whose difference value from the aligned face image to be recognized is not less than the first threshold value and less than the second threshold value.

According to one or more embodiments, the alignment in step 3 may compute a transformation matrix from the detected key points in the face region of the face image to be recognized, converting the non-vertical face image into a vertical one; that is, the face image to be recognized is aligned via the transformation matrix.

Specifically, in a vertical face the left and right cheeks, the eyes, and the left and right mouth corners should be symmetric; for example, the key points of the two eyes should lie on the same horizontal line, i.e., in the two-dimensional coordinate system corresponding to the face image to be recognized, the y coordinates (vertical-axis coordinates) of the two eye key points are the same.

Based on these rules, since the size of the face region detection box is fixed when the key points in the face region are detected, the relative positions of the detected key points are fixed and known in advance; hence a vertical face region can be obtained simply by scaling, translating, and rotating the face region of the face image to be recognized.

It should be noted that scaling, translation, and rotation all operate on the pixels of the face image to be recognized; since each face image can be understood as a two-dimensional matrix with coordinates, the transformation is performed with the following coordinates.
The scaling matrix is:

$$S=\begin{bmatrix}c&0\\0&c\end{bmatrix}$$

where c is the scaling factor;

the translation matrix (in homogeneous coordinates) is:

$$T=\begin{bmatrix}1&0&t_x\\0&1&t_y\\0&0&1\end{bmatrix}$$

where t_x and t_y are the horizontal displacement along the horizontal axis and the vertical displacement along the vertical axis of the two-dimensional coordinate system corresponding to the face image to be recognized; and

the rotation matrix is:

$$R=\begin{bmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{bmatrix}$$

where θ is the angle to be rotated.
Then, based on the above scaling, translation and rotation matrices, suppose the matrix of key points in the final vertical face region is [X, Y] and the matrix of key points detected in the non-vertical face region is [x, y], and let a = c·cosθ and b = c·sinθ. The following result is obtained:

$$\begin{bmatrix}X\\Y\end{bmatrix}=\begin{bmatrix}a&-b\\b&a\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}+\begin{bmatrix}t_x\\t_y\end{bmatrix}$$

Further, this result can be rewritten as:

$$\begin{bmatrix}x&-y&1&0\\y&x&0&1\end{bmatrix}\begin{bmatrix}a\\b\\t_x\\t_y\end{bmatrix}=\begin{bmatrix}X\\Y\end{bmatrix}$$

Then a, b, t_x and t_y can be obtained by the least squares method.

After a, b, t_x and t_y are obtained, the result above, i.e.,

$$X=ax-by+t_x,\qquad Y=bx+ay+t_y,$$

can be used to convert the non-vertical face region in the face image to be recognized into a vertical face region.
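The least-squares fit of a, b, t_x and t_y and the subsequent conversion can be sketched as follows. This is an illustrative Python sketch, not the patent's code: NumPy is assumed available, key points are given as (x, y) pairs, and `np.linalg.lstsq` solves the linear system above.

```python
import numpy as np

def fit_similarity(src_pts, dst_pts):
    """Least-squares fit of (a, b, t_x, t_y) in X = a*x - b*y + t_x, Y = b*x + a*y + t_y,
    mapping detected key points src_pts onto the known vertical key points dst_pts."""
    rows, rhs = [], []
    for (x, y), (X, Y) in zip(src_pts, dst_pts):
        rows.append([x, -y, 1, 0]); rhs.append(X)
        rows.append([y, x, 0, 1]); rhs.append(Y)
    params, *_ = np.linalg.lstsq(np.array(rows, dtype=float),
                                 np.array(rhs, dtype=float), rcond=None)
    return tuple(params)  # (a, b, t_x, t_y)

def apply_similarity(params, point):
    """Map one coordinate with the fitted transform."""
    a, b, tx, ty = params
    x, y = point
    return (a * x - b * y + tx, b * x + a * y + ty)
```

Applying the fitted transform to every pixel coordinate (in practice via a warp routine) converts the non-vertical face region into a vertical one.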
According to one or more embodiments, for a tracked person, every face image of that person acquired during tracking can be stored.

On this basis, according to one or more embodiments, judging in step S105 whether the person corresponding to the face image to be recognized is a tracked person may include the following steps 1051-1055:

Step 1051: determining a face image to be tracked from a third image group, where the third image group is formed from face images acquired within a preset time range before the face image to be recognized was acquired.

Step 1052: computing a third difference value between the face image to be tracked and each reference face image group.

Step 1053: judging, according to the third difference values, whether a fourth image group exists; if so, executing step 1054; where the fourth image group is a reference face image group whose third difference value from the face image to be tracked is less than the first threshold value.

Step 1054: judging whether the fourth image group and the second image group are the same reference face image group; if so, executing step 1055.

Step 1055: determining that the person corresponding to the face image to be recognized is a tracked person.

According to one or more embodiments, a third image group can be set up to store the face images acquired within the preset time range before the face image to be recognized was acquired. When judging whether the person corresponding to the face image to be recognized is tracked, a face image to be tracked can be obtained from the third image group, the third difference value between it and each reference person image group can be computed, and, according to the third difference values, the reference face image group whose third difference value from the face image to be tracked is less than the first threshold value, i.e., the fourth image group, can be determined.

Clearly, since the third difference value between the face image to be tracked and the fourth image group is less than the first threshold value, the identity of the person in the face image to be tracked is the same as the identity corresponding to the fourth image group. Thus, if the fourth image group and the second image group are the same reference face image group, then, since the person in the face image to be tracked is tracked, the person corresponding to the second image group is a tracked person. Further, since the difference value between the face image to be recognized and the second image group is less than the second threshold value, the probability that the identity of the person in the face image to be recognized equals the identity corresponding to the second image group is high, so the person corresponding to the face image to be recognized is a tracked person.
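Steps 1051-1055 can be sketched as follows. This is a minimal Python illustration with hypothetical names, where `diff_fn` stands in for the difference value computation described earlier and `third_group` holds the recently acquired face images.

```python
def is_tracked_person(third_group, ref_groups, second_group_id, diff_fn, first_threshold):
    """third_group: face images acquired within the preset time range before the probe.
    ref_groups: {group_id: reference face image group}.
    diff_fn(image, group) -> difference value."""
    if not third_group:
        return False
    face_to_track = third_group[-1]               # step 1051: pick one image to track
    diffs = {gid: diff_fn(face_to_track, grp)     # step 1052: third difference values
             for gid, grp in ref_groups.items()}
    fourth = min(diffs, key=diffs.get)            # step 1053: candidate fourth image group
    if diffs[fourth] >= first_threshold:
        return False                              # no fourth image group exists
    return fourth == second_group_id              # steps 1054-1055
```

The person is confirmed as tracked only when the fourth image group resolved from the recent images coincides with the second image group found for the current probe.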
Corresponding to the face recognition method provided by the above embodiments of the present invention, an embodiment of the present invention also provides a face recognition apparatus.

FIG. 5 is a schematic structural diagram of a face recognition apparatus provided by an embodiment of the present invention. As shown in FIG. 5, the apparatus may include the following modules:

a difference value computation module 510, configured to acquire a face image to be recognized and compute difference values between the face image to be recognized and each reference face image group, where the images included in each reference face image group correspond to the same person information;

a first image group judgment module 520, configured to judge, according to the difference values, whether a first image group exists, and to trigger the first result determination module 530 if the first image group exists and the second image group judgment module 540 otherwise, where the first image group is a reference face image group whose difference value from the face image to be recognized is less than a first threshold value;

the first result determination module 530, configured to determine the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the first image group;

the second image group judgment module 540, configured to judge, according to the difference values, whether a second image group exists, and to trigger the tracked person judgment module 550 if so;

the tracked person judgment module 550, configured to judge whether the person corresponding to the face image to be recognized is a tracked person, and to trigger the second result determination module 560 if so; and

the second result determination module 560, configured to determine the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group.

As can be seen, in the solution provided by the embodiments of the present invention, with the fallback threshold mechanism, the judgment against the second threshold value makes successful recognition of the face image to be recognized possible when the recognition result cannot be determined by the first threshold value. This reduces alternating recognition success and failure during continuous recognition caused by changes in the person's position state, improves the robustness of face recognition, and allows personalized response actions to be provided continuously for the person corresponding to the face image to be recognized, improving the person's user experience.
Corresponding to the face recognition method provided by the above embodiments of the present invention, an embodiment of the present invention also provides an electronic device. As shown in FIG. 6, the device includes a processor 601, a communication interface 602, a memory 603 and a communication bus 604, where the processor 601, the communication interface 602 and the memory 603 communicate with each other through the communication bus 604.

The memory 603 is configured to store a computer program.

The processor 601 is configured to implement, when executing the program stored in the memory 603, the steps of any face recognition method provided by the above embodiments of the present invention.

The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc.; for ease of representation, only one thick line is used in the figure, but this does not mean there is only one bus or one type of bus.

The communication interface is used for communication between the above electronic device and other devices.

The memory may include random access memory (RAM) and may also include non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.

The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.

According to one or more embodiments, a computer-readable storage medium is also provided, storing a computer program that, when executed by a processor, implements the steps of any face recognition method provided by the above embodiments of the present invention.

According to one or more embodiments, a computer program product containing instructions is also provided, which, when run on a computer, causes the computer to execute the steps of any face recognition method provided by the above embodiments of the present invention.

In the above embodiments, implementation may be in whole or in part by software, hardware, firmware or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present invention are produced. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from a website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., a Solid State Disk (SSD)), etc.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The embodiments in this specification are described in a related manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, as the apparatus, electronic device, computer-readable storage medium, and computer program product embodiments are substantially similar to the method embodiments, their descriptions are relatively brief; for relevant details, reference may be made to the description of the method embodiments.
One or more embodiments of the present invention may have one or more of the following beneficial effects:
According to one or more embodiments, for a face image to be recognized, when the difference values between the face image to be recognized and each reference face image group are all not less than the first threshold, so that a recognition result cannot be obtained by a single determination, it can be further determined whether there exists, among the reference face image groups, a second image group whose difference value from the face image to be recognized is not less than the first threshold but less than the second threshold. Then, when such a second image group exists, it can be further determined whether the person corresponding to the face image to be recognized is a tracked person; when that person is determined to be a tracked person, it can be concluded that the person corresponding to the face image to be recognized is the person corresponding to the second image group, and the identity of that person can be determined based on the person information corresponding to the second image group, thereby achieving successful recognition of the face image to be recognized.
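The tracked-person determination mentioned above can be sketched as follows: a face image captured shortly before the current one is matched against the reference groups with the first threshold, and the person counts as tracked when that match lands on the candidate second image group. This is an illustrative sketch; the function name, the recent-image selection policy, and the threshold value are assumptions not fixed by the text.

```python
# Sketch of the tracked-person check: match a recently captured face
# image (from the "third image group") against all reference groups
# using the first threshold; the matching group, if any, plays the
# role of the "fourth image group".
def is_tracked_person(recent_diffs, second_group, threshold_1=0.4):
    """recent_diffs: {group_id: difference value} for the face image
    to be tracked; second_group: id of the candidate second image
    group. Returns True when the person counts as tracked."""
    fourth_group = None
    for group_id, diff in recent_diffs.items():
        if diff < threshold_1:        # a fourth image group exists
            fourth_group = group_id
            break
    # Tracked only if the fourth and second image groups coincide.
    return fourth_group is not None and fourth_group == second_group
```

The design intent here is that the looser second threshold is never applied in isolation: it is validated by a strict (first-threshold) match on an earlier frame of the same person.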
According to one or more embodiments, since the person corresponding to the face image to be recognized is a tracked person, other face images of that person collected before the face image to be recognized have already been successfully recognized. Therefore, during recognition of the face image to be recognized, even if the difference values between the face image to be recognized and each reference face image group are all not less than the first threshold, the existence of the above second image group indicates that the person corresponding to the face image to be recognized is the tracked person corresponding to the second image group, and that the face image to be recognized may be a non-frontal face image of that person.
That is to say, when the person corresponding to the face image to be recognized is a tracked person, even if the face image used in a given recognition attempt is a non-frontal face image of that person, the person can still be successfully recognized under the dual constraints of the first threshold and the second threshold. Thus, during continuous recognition of that person, the likelihood of recognition alternating between success and failure due to changes in the person's position or posture is reduced. The above manner of recognizing the face image to be recognized under the dual constraints of the first threshold and the second threshold may be referred to as a fallback threshold mechanism.
On this basis, according to one or more embodiments, the fallback threshold mechanism, by way of the second-threshold determination, enables successful recognition of the face image to be recognized even when a recognition result cannot be obtained from the first threshold alone. Thus, during continuous recognition of the person corresponding to the face image to be recognized, the likelihood of recognition alternating between success and failure due to changes in that person's position or posture is reduced. This improves the robustness of face recognition, so that personalized response actions can be provided continuously for the person corresponding to the face image to be recognized, improving that person's user experience.
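For context, the difference values used throughout can be derived from feature-vector similarities: a first feature vector of the image to be recognized is compared against a per-group second feature vector obtained from the third feature vectors of the group's images. The sketch below uses cosine similarity; the mean aggregation of group vectors and the `1 - similarity` mapping are assumptions for illustration only.

```python
import math

def cosine_similarity(u, v):
    # Standard cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def group_feature(vectors):
    # Second feature vector of a reference group, taken here as the
    # mean of the third feature vectors of its images (an assumed
    # aggregation; the text only says it is "obtained based on" them).
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def difference(first_vec, group_vectors):
    # Difference value = 1 - similarity (an assumed mapping: lower
    # difference means a better match, consistent with the thresholds).
    return 1.0 - cosine_similarity(first_vec, group_feature(group_vectors))
```

With this mapping, an identical vector yields a difference of 0, and the thresholds partition the difference range as the embodiments describe.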
Although the present invention has been described with respect to a limited number of embodiments, those of ordinary skill in the art having the benefit of this disclosure will appreciate that other embodiments may be devised without departing from the scope of the invention disclosed herein. Accordingly, the scope of the invention should be limited only by the appended claims.

Claims (17)

  1. A face recognition method, comprising:
    acquiring a face image to be recognized, and calculating difference values between the face image to be recognized and each reference face image group, wherein the images included in each reference face image group correspond to the same person information;
    determining, based on the difference values, whether there exists a first image group, wherein the first image group is a reference face image group whose difference value from the face image to be recognized is less than a first threshold;
    when the first image group exists, determining, based on person information corresponding to the first image group, an identity of a person corresponding to the face image to be recognized;
    when the first image group does not exist, determining, based on the difference values, whether there exists a second image group, wherein the second image group is a reference face image group whose difference value from the face image to be recognized is not less than the first threshold and less than a second threshold, the first threshold being less than the second threshold;
    if the second image group exists, determining whether the person corresponding to the face image to be recognized is a tracked person;
    if it is determined that the person corresponding to the face image to be recognized is the tracked person, determining, based on person information corresponding to the second image group, the identity of the person corresponding to the face image to be recognized.
  2. The method according to claim 1, wherein the step of calculating the difference values between the face image to be recognized and each reference face image group comprises:
    calculating a similarity between the face image to be recognized and each reference face image group respectively;
    calculating, based on the similarities, the difference values between the face image to be recognized and each reference face image group respectively.
  3. The method according to claim 2, wherein the step of calculating the similarity between the face image to be recognized and each reference face image group comprises:
    extracting a first feature vector of the face image to be recognized;
    calculating a second feature vector of each reference face image group respectively, wherein, for each reference face image group, the second feature vector of the reference face image group is obtained based on third feature vectors of all face images in the reference face image group;
    calculating similarities between the first feature vector of the face image to be recognized and the second feature vector of each reference face image group respectively, as the similarities between the face image to be recognized and each reference face image group.
  4. The method according to claim 3, further comprising:
    when the first image group exists, determining whether the difference value between the face image to be recognized and the first image group is less than a third threshold, wherein the third threshold is less than the first threshold;
    if it is determined that the difference value between the face image to be recognized and the first image group is less than the third threshold, updating the reference face image group corresponding to the first image group with the face image to be recognized.
  5. The method according to claim 4, wherein the step of updating the reference face image group corresponding to the first image group with the face image to be recognized comprises:
    determining whether the number of face images included in the reference face image group corresponding to the first image group reaches a preset number;
    if it is determined that the number of face images included in the reference face image group corresponding to the first image group does not reach the preset number, adding the face image to be recognized to the reference face image group corresponding to the first image group.
  6. The method according to claim 4, wherein the step of updating the reference face image group corresponding to the first image group with the face image to be recognized comprises:
    determining whether the number of face images included in the reference face image group corresponding to the first image group reaches a preset number;
    if it is determined that the number of face images included in the reference face image group corresponding to the first image group reaches the preset number, obtaining a first difference value between each face image in the reference face image group corresponding to the first image group and the first image group respectively, and obtaining a second difference value between the face image to be recognized and the first image group;
    determining whether there exists, in the reference face image group corresponding to the first image group, a face image whose first difference value is greater than the second difference value;
    if it is determined that there exists, in the reference face image group corresponding to the first image group, a face image whose first difference value is greater than the second difference value, deleting the face image corresponding to the largest first difference value from the reference face image group corresponding to the first image group, and adding the face image to be recognized to the reference face image group corresponding to the first image group.
  7. The method according to claim 4, wherein the first image group includes a base face image group and a dynamic face image group, the base face image group including at least one base face image, and wherein the step of updating the reference face image group corresponding to the first image group with the face image to be recognized comprises:
    keeping the base face image group unchanged, and updating the dynamic face image group with the face image to be recognized.
  8. The method according to claim 1, wherein the step of determining whether the person corresponding to the face image to be recognized is a tracked person comprises:
    determining a face image to be tracked from a third image group, wherein the third image group is formed from face images acquired within a preset time range before the acquisition of the face image to be recognized;
    calculating third difference values between the face image to be tracked and each reference face image group;
    determining, based on the third difference values, whether there exists a fourth image group, wherein the fourth image group is a reference face image group whose difference value from the face image to be tracked is less than the first threshold;
    if the fourth image group exists, determining whether the fourth image group and the second image group are the same reference face image group;
    if it is determined that the fourth image group and the second image group are the same reference face image group, determining that the person corresponding to the face image to be recognized is the tracked person.
  9. A face recognition apparatus, comprising:
    a difference value calculation module, configured to acquire a face image to be recognized and calculate difference values between the face image to be recognized and each reference face image group, wherein the images included in each reference face image group correspond to the same person information;
    a first image group determination module, configured to determine, based on the difference values, whether there exists a first image group, wherein, if the first image group exists, the first image group determination module triggers a first result determination module, and otherwise triggers a second image group determination module, the first image group being a reference face image group whose difference value from the face image to be recognized is less than a first threshold;
    the first result determination module, configured to determine, based on person information corresponding to the first image group, an identity of a person corresponding to the face image to be recognized;
    the second image group determination module, configured to determine, based on the difference values, whether there exists a second image group, wherein, if the second image group exists, the second image group determination module triggers a tracked-person determination module;
    the tracked-person determination module, configured to determine whether the person corresponding to the face image to be recognized is a tracked person, wherein, if it is determined that the person corresponding to the face image to be recognized is a tracked person, the tracked-person determination module triggers a second result determination module;
    the second result determination module, configured to determine, based on person information corresponding to the second image group, the identity of the person corresponding to the face image to be recognized.
  10. The face recognition apparatus according to claim 9, wherein, in calculating the difference values between the face image to be recognized and each reference face image group, the difference value calculation module is configured to:
    calculate a similarity between the face image to be recognized and each reference face image group respectively;
    calculate, based on the similarities, the difference values between the face image to be recognized and each reference face image group respectively.
  11. The face recognition apparatus according to claim 10, wherein, in calculating the similarity between the face image to be recognized and each reference face image group, the difference value calculation module is configured to:
    extract a first feature vector of the face image to be recognized;
    calculate a second feature vector of each reference face image group respectively, wherein, for each reference face image group, the second feature vector of the reference face image group is obtained based on third feature vectors of all face images in the reference face image group;
    calculate similarities between the first feature vector of the face image to be recognized and the second feature vector of each reference face image group respectively, as the similarities between the face image to be recognized and each reference face image group.
  12. The face recognition apparatus according to claim 11, wherein the first image group determination module is further configured to:
    when the first image group exists, determine whether the difference value between the face image to be recognized and the first image group is less than a third threshold, wherein the third threshold is less than the first threshold;
    if it is determined that the difference value between the face image to be recognized and the first image group is less than the third threshold, update the reference face image group corresponding to the first image group with the face image to be recognized.
  13. The face recognition apparatus according to claim 12, wherein, in updating the reference face image group corresponding to the first image group with the face image to be recognized, the first image group determination module is configured to:
    determine whether the number of face images included in the reference face image group corresponding to the first image group reaches a preset number;
    if it is determined that the number of face images included in the reference face image group corresponding to the first image group does not reach the preset number, add the face image to be recognized to the reference face image group corresponding to the first image group.
  14. The face recognition apparatus according to claim 12, wherein, in updating the reference face image group corresponding to the first image group with the face image to be recognized, the first image group determination module is configured to:
    determine whether the number of face images included in the reference face image group corresponding to the first image group reaches a preset number;
    if it is determined that the number of face images included in the reference face image group corresponding to the first image group reaches the preset number, obtain a first difference value between each face image in the reference face image group corresponding to the first image group and the first image group respectively, and obtain a second difference value between the face image to be recognized and the first image group;
    determine whether there exists, in the reference face image group corresponding to the first image group, a face image whose first difference value is greater than the second difference value;
    if it is determined that there exists, in the reference face image group corresponding to the first image group, a face image whose first difference value is greater than the second difference value, delete the face image corresponding to the largest first difference value from the reference face image group corresponding to the first image group, and add the face image to be recognized to the reference face image group corresponding to the first image group.
  15. The face recognition apparatus according to claim 12, wherein the first image group includes a base face image group and a dynamic face image group, the base face image group including at least one base face image, and wherein, in updating the reference face image group corresponding to the first image group with the face image to be recognized, the first image group determination module is configured to:
    keep the base face image group unchanged, and update the dynamic face image group with the face image to be recognized.
  16. The face recognition apparatus according to claim 9, wherein, in determining whether the person corresponding to the face image to be recognized is a tracked person, the tracked-person determination module is configured to:
    determine a face image to be tracked from a third image group, wherein the third image group is formed from face images acquired within a preset time range before the acquisition of the face image to be recognized;
    calculate third difference values between the face image to be tracked and each reference face image group;
    determine, based on the third difference values, whether there exists a fourth image group, wherein the fourth image group is a reference face image group whose difference value from the face image to be tracked is less than the first threshold;
    if the fourth image group exists, determine whether the fourth image group and the second image group are the same reference face image group;
    if it is determined that the fourth image group and the second image group are the same reference face image group, determine that the person corresponding to the face image to be recognized is the tracked person.
  17. An electronic device, comprising:
    a communication interface;
    a communication bus;
    a memory, configured to store a computer program;
    a processor, configured to execute the program stored in the memory to implement the method according to any one of claims 1 to 8,
    wherein the processor, the communication interface, and the memory communicate with one another via the communication bus.
PCT/CN2021/113209 2020-12-31 2021-08-18 Face recognition method and apparatus, and electronic device WO2022142375A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011617404.7A CN112287918B (zh) 2020-12-31 2020-12-31 Face recognition method and apparatus, and electronic device
CN202011617404.7 2020-12-31

Publications (1)

Publication Number Publication Date
WO2022142375A1 true WO2022142375A1 (zh) 2022-07-07

Family

ID=74425127

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/113209 WO2022142375A1 (zh) 2020-12-31 2021-08-18 Face recognition method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN112287918B (zh)
WO (1) WO2022142375A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287918B (zh) * 2020-12-31 2021-03-19 湖北亿咖通科技有限公司 Face recognition method and apparatus, and electronic device
CN112818901B (zh) * 2021-02-22 2023-04-07 成都睿码科技有限责任公司 Masked-face recognition method based on an eye attention mechanism
CN113459975B (zh) * 2021-07-31 2022-10-04 重庆长安新能源汽车科技有限公司 Intelligent cockpit system

Citations (4)

Publication number Priority date Publication date Assignee Title
CN109902644A (zh) * 2019-03-07 2019-06-18 北京海益同展信息科技有限公司 Face recognition method, apparatus, device and computer-readable medium
CN110728234A (zh) * 2019-10-12 2020-01-24 爱驰汽车有限公司 Driver face recognition method, system, device and medium
US20200285837A1 (en) * 2015-06-24 2020-09-10 Samsung Electronics Co., Ltd. Face recognition method and apparatus
CN112287918A (zh) * 2020-12-31 2021-01-29 湖北亿咖通科技有限公司 Face recognition method and apparatus, and electronic device

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US7734071B2 (en) * 2003-06-30 2010-06-08 Honda Motor Co., Ltd. Systems and methods for training component-based object identification systems
US7978936B1 (en) * 2006-01-26 2011-07-12 Adobe Systems Incorporated Indicating a correspondence between an image and an object
US8254646B2 (en) * 2010-01-25 2012-08-28 Apple Inc. Image preprocessing
CN101964064B (zh) * 2010-07-27 2013-06-19 上海摩比源软件技术有限公司 Face comparison method
CN106446754A (zh) * 2015-08-11 2017-02-22 阿里巴巴集团控股有限公司 Image recognition method, metric learning method, image source identification method and apparatus
CN107609383B (zh) * 2017-10-26 2021-01-26 奥比中光科技集团股份有限公司 3D face identity authentication method and apparatus
CN108614894B (zh) * 2018-05-10 2021-07-02 西南交通大学 Face recognition database construction method based on maximum spanning tree
CN111898413A (zh) * 2020-06-16 2020-11-06 深圳市雄帝科技股份有限公司 Face recognition method and apparatus, electronic device and medium

Also Published As

Publication number Publication date
CN112287918B (zh) 2021-03-19
CN112287918A (zh) 2021-01-29


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21913123; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21913123; Country of ref document: EP; Kind code of ref document: A1)