CN112287918B - Face recognition method and device and electronic equipment - Google Patents


Info

Publication number
CN112287918B
Authority
CN
China
Prior art keywords
face image
image group
recognized
face
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011617404.7A
Other languages
Chinese (zh)
Other versions
CN112287918A (en)
Inventor
李林峰
黄海荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ecarx Hubei Tech Co Ltd
Original Assignee
Hubei Ecarx Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Ecarx Technology Co Ltd
Priority to CN202011617404.7A
Publication of CN112287918A
Application granted
Publication of CN112287918B
Priority to PCT/CN2021/113209 (published as WO2022142375A1)
Legal status: Active (current)


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411: Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The embodiment of the invention provides a face recognition method, a face recognition device, and electronic equipment, applied in the technical field of artificial intelligence. The method comprises the following steps: acquiring a face image to be recognized, and calculating difference values between the face image to be recognized and each reference face image group; judging whether a first image group exists according to the difference values; when the first image group exists, obtaining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the first image group; when the first image group does not exist, judging whether a second image group exists; if the second image group exists, judging whether the person corresponding to the face image to be recognized is a tracked person; and if the person is a tracked person, obtaining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group. Compared with the prior art, the scheme provided by the embodiment of the invention can improve the robustness of face recognition and thereby improve the user experience of drivers and passengers.

Description

Face recognition method and device and electronic equipment
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a face recognition method, a face recognition device and electronic equipment.
Background
Currently, a vehicle-mounted camera is usually installed in a vehicle so that the occupants can monitor the environment inside and outside the vehicle while driving, thereby improving driving safety.
During driving, the vehicle-mounted camera can collect face images of the occupants in real time, and the occupants' identities can be recognized from the collected face images. Personalized response actions can then be provided according to each occupant's identity, improving the occupants' user experience.
For example, the seat back tilt angle, the mirror position, and the like are automatically adjusted according to the habits of different occupants, and for example, song recommendation and the like are automatically performed according to the preferences of different occupants.
In the related art, the vehicle-mounted camera collects occupants' face images in real time and performs face recognition on each frame. However, when a certain face image is collected, the occupant may have their head lowered, their face turned to the side, or their face turned toward the window, so the camera cannot capture a frontal face image, and identification of the occupant fails. When the next face image is collected, the occupant may be facing the camera, so a frontal face image is captured and the occupant's identity is successfully recognized.
Obviously, because the occupant may be in different position states, this may happen many times during continuous face recognition of the occupant.
Therefore, in the related art, while continuously recognizing face images of the same occupant, recognition success and recognition failure may alternate; that is, the robustness of face recognition is poor. As a result, personalized response actions cannot be continuously provided for the occupant, which seriously affects the occupant's user experience.
Disclosure of Invention
The embodiment of the invention aims to provide a face recognition method, a face recognition device and electronic equipment, so as to improve the robustness of face recognition and further improve the user experience of drivers and passengers.
The specific technical scheme is as follows: in a first aspect, an embodiment of the present invention provides a face recognition method, where the method includes:
acquiring a face image to be recognized, and calculating difference values between the face image to be recognized and each reference face image group; each image included in each reference face image group corresponds to the same person information;
judging whether a first image group exists according to the difference values, wherein the first image group is: a reference face image group whose difference value with the face image to be recognized is smaller than a first threshold;
when a first image group exists, determining the identity of a person corresponding to the face image to be recognized based on the person information corresponding to the first image group;
when the first image group does not exist, judging whether a second image group exists according to the difference values; wherein the second image group is: a reference face image group whose difference value with the face image to be recognized is not smaller than the first threshold and is smaller than a second threshold; the first threshold is smaller than the second threshold;
if the second image group exists, judging whether the person corresponding to the face image to be recognized is a tracked person or not;
and if the person is a tracked person, determining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group.
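The claimed decision flow above can be sketched as follows. This is a minimal illustration under stated assumptions: function and variable names are hypothetical, and the patent does not say how ties between several matching groups are resolved, so the sketch simply picks the group with the smallest difference value.

```python
def recognize(diff_values, first_threshold, second_threshold, is_tracked):
    """Two-threshold ("fallback threshold") decision flow (illustrative).

    diff_values: dict mapping person info -> difference value between the
        face image to be recognized and that person's reference face image group.
    is_tracked: callable(person_info) -> bool, the tracking check.
    Returns the matched person info, or None if recognition fails.
    """
    # First image group: difference value below the first threshold.
    first = [p for p, d in diff_values.items() if d < first_threshold]
    if first:
        return min(first, key=diff_values.get)  # smallest difference wins

    # Second image group: difference value in [first_threshold, second_threshold).
    second = [p for p, d in diff_values.items()
              if first_threshold <= d < second_threshold]
    if second:
        best = min(second, key=diff_values.get)
        if is_tracked(best):  # only accept when the person is being tracked
            return best
    return None  # recognition fails for this frame
```

Note that the second branch is only reached when no group clears the stricter first threshold, which is exactly the fallback behavior the summary describes.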
Optionally, in a specific implementation manner, the step of calculating a difference value between the to-be-recognized face image and each group of reference face images includes:
respectively calculating the similarity between the face image to be recognized and each reference face image group;
and respectively calculating difference values of the face image to be recognized and each reference face image group based on the similarity.
Optionally, in a specific implementation manner, the step of calculating the similarity between the facial image to be recognized and each reference facial image group includes:
extracting a first feature vector of the facial image to be recognized;
respectively calculating a second feature vector of each reference face image group; aiming at each reference face image group, the second feature vector of the reference face image group is obtained based on the third feature vectors of all face images in the reference face image group;
and respectively calculating the similarity between the first characteristic vector of the face image to be recognized and the second characteristic vector of each reference face image group as the similarity between the face image to be recognized and each reference face image group.
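One plausible concretization of these steps, assuming the second feature vector is the element-wise mean of the third feature vectors and the difference value is defined as one minus the cosine similarity (the patent leaves both mappings open, so these are illustrative choices only):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def group_feature_vector(third_vectors):
    """Second feature vector of a reference face image group, taken here as
    the element-wise mean of the third feature vectors of its images."""
    n = len(third_vectors)
    return [sum(vals) / n for vals in zip(*third_vectors)]

def difference_value(first_vector, group_third_vectors):
    """Difference value derived from similarity (assumed: 1 - cosine)."""
    second_vector = group_feature_vector(group_third_vectors)
    return 1.0 - cosine_similarity(first_vector, second_vector)
```

With this mapping, identical vectors give a difference value of 0 and orthogonal vectors give 1, so smaller difference values mean closer matches, consistent with the threshold comparisons above.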
Optionally, in a specific implementation manner, the method further includes:
when the first image group exists, judging whether the difference value between the face image to be recognized and the first image group is smaller than a third threshold value; wherein the third threshold is less than the first threshold;
and if so, updating the reference face image group corresponding to the first image group by using the face image to be recognized.
Optionally, in a specific implementation manner, the step of updating the reference face image group corresponding to the first image group by using the face image to be recognized includes:
judging whether the number of the face images included in the reference face image group corresponding to the first image group reaches a preset number or not;
and if not, adding the face image to be recognized to the reference face image group corresponding to the first image group.
Optionally, in a specific implementation manner, the step of updating the reference face image group corresponding to the first image group by using the face image to be recognized includes:
judging whether the number of the face images included in the reference face image group corresponding to the first image group reaches a preset number or not;
if so, respectively acquiring, for each face image in the reference face image group corresponding to the first image group, a first difference value between that face image and the first image group, and acquiring a second difference value between the face image to be recognized and the first image group;
judging whether a face image with the first difference value larger than the second difference value exists in a reference face image group corresponding to the first image group;
and if so, deleting the face image corresponding to the largest first difference value in the reference face image group corresponding to the first image group, and adding the face image to be recognized to the reference face image group corresponding to the first image group.
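The two update branches above (add while below the preset number, otherwise replace the worst member) can be sketched as follows; all names are illustrative:

```python
def update_group(group, new_image, new_diff, diff_of, preset_number):
    """Update a reference face image group with a newly recognized image.

    group: list of face images in the group (mutated in place).
    new_diff: second difference value of the image to be recognized.
    diff_of: callable(image) -> first difference value between that image
        and the group.
    """
    if len(group) < preset_number:
        group.append(new_image)  # room left: just add
        return group
    # Group is full: replace the member with the largest first difference
    # value, but only if it is worse than the new image.
    worst = max(group, key=diff_of)
    if diff_of(worst) > new_diff:
        group.remove(worst)
        group.append(new_image)
    return group
```

This keeps the group size bounded at the preset number while letting better-matching images gradually displace worse ones.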
Optionally, in a specific implementation manner, the first image group includes a fixed reference face image subgroup and a dynamic face image subgroup, where the fixed subgroup includes at least one designated reference face image; the step of updating the reference face image group corresponding to the first image group by using the face image to be recognized includes:
keeping the fixed reference face image subgroup unchanged, and updating the dynamic face image subgroup by using the face image to be recognized.
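A minimal sketch of this split between the designated reference images and the dynamic part; the eviction policy for a full dynamic subgroup is an assumption (the sketch drops the oldest image), as the patent does not fix it here:

```python
def update_dynamic_part(base_images, dynamic_images, new_image, capacity):
    """Update only the dynamic face images; the designated reference
    (base) images are never touched. Illustrative policy: when the
    dynamic part is full, the oldest image is dropped."""
    dynamic_images.append(new_image)
    if len(dynamic_images) > capacity:
        dynamic_images.pop(0)  # assumed FIFO eviction
    return base_images + dynamic_images  # the full group after the update
```

Keeping the base images untouched guarantees that a high-quality designated image (e.g. an identity-card photo) always remains in the group, whatever images the camera collects later.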
Optionally, in a specific implementation manner, the step of determining whether the person corresponding to the face image to be recognized is a tracked person includes:
determining a face image to be tracked from a third image group; the third image group is formed by face images collected within a preset time range before the face image to be recognized is collected;
calculating a third difference value between the face image to be tracked and each reference face image group;
judging whether a fourth image group exists according to the third difference value, wherein the fourth image group is as follows: the difference value between the reference face image group and the face image to be tracked is smaller than the first threshold value;
if the fourth image group exists, judging whether the fourth image group and the second image group are the same reference human face image group;
and if so, determining that the person corresponding to the face image to be recognized is the tracked person.
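The tracking check can be sketched as follows, generalizing slightly by testing every image in the third image group rather than a single selected one (an assumption; names are illustrative):

```python
def is_tracked_person(recent_diffs, second_group_id, first_threshold):
    """Tracking check over the third image group.

    recent_diffs: for each face image collected within the preset time
        window before the current frame, a dict mapping reference face
        image group id -> third difference value.
    The person counts as tracked if some earlier image matches (difference
    value below the first threshold) the same group as the second image
    group, i.e. the fourth image group equals the second image group.
    """
    for diffs in recent_diffs:
        fourth = [g for g, d in diffs.items() if d < first_threshold]
        if second_group_id in fourth:
            return True
    return False
```

The key point is that the earlier match must clear the stricter first threshold, so a fallback match is only accepted for a person who was recently identified with high confidence.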
In a second aspect, an embodiment of the present invention provides a face recognition apparatus, where the apparatus includes:
the difference value calculation module is used for acquiring a face image to be recognized and calculating the difference value between the face image to be recognized and each reference face image group; each image included in each reference face image group corresponds to the same person information;
the first image group judging module is used for judging whether a first image group exists according to the difference value, if so, the first result determining module is triggered, otherwise, the second image group judging module is triggered; wherein the first image group is: a reference face image group with the difference value with the face image to be recognized smaller than a first threshold value;
the first result determining module is used for determining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the first image group;
the second image group judging module is used for judging whether a second image group exists according to the difference value, and if the second image group exists, the tracker judging module is triggered;
the tracker judging module is used for judging whether the person corresponding to the face image to be identified is a tracked person; if the person is tracked, triggering a second result determining module;
and the second result determining module is used for determining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
and a processor, configured to implement the steps of any one of the face recognition methods provided in the first aspect when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps of any one of the face recognition methods provided in the first aspect.
In a fifth aspect, an embodiment of the present invention provides a computer program product containing instructions, which when run on a computer, causes the computer to perform the steps of any of the face recognition methods provided in the first aspect.
The embodiment of the invention has the following beneficial effects:
as can be seen from the above, in the scheme provided in the embodiment of the present invention, for a face image to be recognized, when a difference value between the face image to be recognized and each reference face image group is not less than a first threshold value, so that a recognition result of the face image to be recognized cannot be obtained through one-time determination, it can be further determined whether a second image group exists in each reference face image group, where the difference value between the second image group and the face image to be recognized is not less than the first threshold value and is less than a second threshold value.
And then, when the judgment result is that the second image group exists, whether the person corresponding to the face image to be recognized is the tracked person can be further judged, so that when the person is judged to be the tracked person, the person corresponding to the face image to be recognized can be determined to be the person corresponding to the second image group, and therefore the identity of the person corresponding to the face image to be recognized can be determined based on the person information corresponding to the second image group, and the face image to be recognized can be successfully recognized.
The face image to be recognized is the tracked person, so that other collected face images of the person can be successfully recognized before the face image to be recognized.
Therefore, in the process of recognizing the face image to be recognized, even if the difference value between the face image to be recognized and every reference face image group is not smaller than the first threshold, the existence of the second image group indicates that the person corresponding to the face image to be recognized is the tracked person corresponding to the second image group, and the face image to be recognized may be a non-frontal face image of that person.
That is to say, when the person corresponding to the face image to be recognized is the tracked person, even if the face image to be recognized used for face recognition at a certain time is a non-frontal face image of the person, successful recognition of the person can still be achieved according to the dual limiting conditions of the first threshold and the second threshold.
Therefore, in the continuous identification process of the person corresponding to the face image to be identified, the possibility that identification success and identification failure alternately occur due to the change of the position state of the person can be reduced.
The above manner of recognizing the face image to be recognized according to the dual limiting conditions of the first threshold and the second threshold may be referred to as a fallback threshold mechanism.
Based on this, in the scheme provided by the embodiment of the present invention, by using the fallback threshold mechanism and the judgment against the second threshold, the face image to be recognized can still be successfully recognized when its recognition result cannot be determined through the first threshold alone. Therefore, during continuous recognition of the person corresponding to the face image to be recognized, the possibility that recognition success and recognition failure alternate due to changes in the person's position state can be reduced.
Therefore, the robustness of face recognition can be improved, personalized response actions can be continuously provided for the personnel corresponding to the face image to be recognized, and the user experience of the personnel is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present invention;
FIG. 2(a) is a schematic diagram of a face detected with 5 key points;
FIG. 2(b) is a schematic diagram of 68 detected key points in a human face;
FIG. 2(c) is a schematic representation of the keypoint characterization of FIG. 2 (b);
FIG. 3 is a schematic diagram of a first eigenvector and a second eigenvector in a two-dimensional coordinate system;
FIG. 4(a) is a schematic diagram of a face region in a face image to be recognized being a non-vertical region;
fig. 4(b) is a schematic diagram of a face region in the aligned face image to be recognized, which is obtained after the alignment of fig. 4(a), being a vertical region;
fig. 5 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the related art, in the process of continuously recognizing each occupant's face images, recognition success and recognition failure may alternate; that is, the robustness of face recognition is poor, so personalized response actions cannot be continuously provided for the occupant, seriously affecting the occupant's user experience.
In order to solve the above technical problem, an embodiment of the present invention provides a face recognition method.
The method can be applied to any scene needing continuous recognition of the face image. For example, in the driving process of a vehicle, a driver and passengers are continuously identified by using a human face image acquired by a vehicle-mounted camera; for example, a road traffic camera is used to continuously recognize a face image of a pedestrian.
In addition, the method can be applied to any type of electronic device; that is, the execution subject of the method may be any type of electronic device, such as a vehicle-mounted camera, a mobile phone, a notebook computer, or a desktop computer. For clarity of description, the device executing the method is hereinafter simply referred to as the electronic device.
For an electronic device capable of directly collecting face images, such as a vehicle-mounted camera, the device may directly execute the method provided by the embodiment of the present invention to continuously recognize the collected face images, or it may send the collected face images to another electronic device, such as a desktop computer, which then executes the method to continuously recognize them. For an electronic device that cannot directly collect face images, such as a desktop computer, the device may acquire face images collected by another device that can, such as a vehicle-mounted camera, and then execute the method provided by the embodiment of the present invention to continuously recognize the acquired face images. Both arrangements are reasonable.
Based on this, the embodiment of the present invention does not limit the application scenario and the execution subject of the face recognition method provided by the embodiment of the present invention.
The face recognition method provided by the embodiment of the invention can comprise the following steps:
acquiring a face image to be recognized, and calculating difference values between the face image to be recognized and each reference face image group; each image included in each reference face image group corresponds to the same person information;
judging whether a first image group exists according to the difference values, wherein the first image group is: a reference face image group whose difference value with the face image to be recognized is smaller than a first threshold;
when a first image group exists, determining the identity of a person corresponding to the face image to be recognized based on the person information corresponding to the first image group;
when the first image group does not exist, judging whether a second image group exists according to the difference values; wherein the second image group is: a reference face image group whose difference value with the face image to be recognized is not smaller than the first threshold and is smaller than a second threshold; the first threshold is smaller than the second threshold;
if the second image group exists, judging whether the person corresponding to the face image to be recognized is a tracked person or not;
and if the person is a tracked person, determining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group.
As can be seen from the above, in the scheme provided by the embodiment of the present invention, when the difference value between the face image to be recognized and every reference face image group is not smaller than the first threshold, so that a recognition result cannot be obtained through a single determination, it can be further determined whether a second image group exists among the reference face image groups, that is, a group whose difference value with the face image to be recognized is not smaller than the first threshold but smaller than the second threshold. Then, when the second image group exists, it can be further determined whether the person corresponding to the face image to be recognized is a tracked person; if so, that person can be determined to be the person corresponding to the second image group, the person's identity can be determined based on the person information corresponding to the second image group, and the face image to be recognized is successfully recognized.
Since the person corresponding to the face image to be recognized is a tracked person, other face images of the same person collected before the face image to be recognized have already been successfully recognized. Therefore, even if the difference value between the face image to be recognized and every reference face image group is not smaller than the first threshold, the existence of the second image group indicates that the person corresponding to the face image to be recognized is the tracked person corresponding to the second image group, and the face image to be recognized may be a non-frontal face image of that person.
That is to say, when the person corresponding to the face image to be recognized is a tracked person, even if the face image used for face recognition at a certain time is a non-frontal image of the person, the person can still be successfully recognized according to the dual limiting conditions of the first threshold and the second threshold. Therefore, during continuous recognition of the person, the possibility that recognition success and recognition failure alternate due to changes in the person's position state can be reduced. This manner of recognizing the face image to be recognized according to the dual limiting conditions of the first threshold and the second threshold may be referred to as a fallback threshold mechanism.
Based on this, in the scheme provided by the embodiment of the present invention, by using the fallback threshold mechanism and the judgment against the second threshold, the face image to be recognized can still be successfully recognized when its recognition result cannot be determined through the first threshold alone. Therefore, during continuous recognition of the person corresponding to the face image to be recognized, the possibility that recognition success and recognition failure alternate due to changes in the person's position state can be reduced, the robustness of face recognition can be improved, personalized response actions can be continuously provided for the person, and the person's user experience is improved.
A face recognition method provided in an embodiment of the present invention is specifically described below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present invention, and as shown in fig. 1, the method may include the following steps:
s101: acquiring a face image to be recognized, and calculating difference values of the face image to be recognized and each reference face image group;
each image included in each reference face image group corresponds to the same person information;
it should be noted that, one reference face image group includes a plurality of face images corresponding to the same person, and when each reference face image group is initially set, a reference face image of the person may be designated in each reference face image group in advance, where the reference face image may be an image with higher definition and capable of representing the face features of the user more accurately and comprehensively, for example, a face image on an identity card of the person, or a face image on a driver's license of the person. Based on this, when the reference face image groups are initially set, each reference face image group may include only the reference face image of the person corresponding to the reference face image group.
Furthermore, in the face recognition process, for some recognized people, the collected face images for recognizing the people may also be capable of representing the face features of the user more accurately, and therefore, the collected face images may be added to the reference face image group corresponding to the people, so that the number of preset images in the reference face image group corresponding to the people is increased, and the reference face image group is more accurate as the matching standard of face recognition.
Moreover, each image in each reference face image group corresponds to the same person information, so that when the face image to be recognized is recognized, the difference value between the face image to be recognized and each reference face image group can be directly calculated for each reference face image group without calculating the difference value between the face image to be recognized and each reference face image.
Optionally, each reference face image group may be stored in a preset reference image library, so that when the reference image library is constructed and face recognition is not started yet, at least one face image of each person possibly recognized in the face recognition process is stored in the reference image library. The face images corresponding to the same person form a reference face image group. After the recognition is started, the collected face image can be added to a reference face image group corresponding to the face image in the reference image library.
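As a minimal illustration of the library structure described above, the reference image library can be thought of as a mapping from person information to that person's reference face image group; the dictionary layout and the `person_id` key are assumptions for illustration, not part of the claimed method:

```python
def build_reference_library(initial_images):
    """Build the preset reference image library: one reference face image
    group (a list of images) per person, keyed by an assumed person id."""
    library = {}
    for person_id, image in initial_images:
        # All images sharing a person id form that person's reference group.
        library.setdefault(person_id, []).append(image)
    return library
```

After recognition starts, a newly collected face image can then be appended to the group keyed by the recognized person's id.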
Based on this, when the face image to be recognized is acquired, the difference value between the face image to be recognized and each reference face image group can be further calculated.
Optionally, the face image to be recognized may be a rectangular image region where the face position in the person image is located, which is obtained by performing face detection on the person image acquired by the image acquisition device.
The face detection may be implemented by a Deep Neural Network (DNN) model, for example, MTCNN (Multi-Task Cascaded Convolutional Neural Network), the FisherFace face recognition algorithm, SSD (Single Shot MultiBox Detector), the YOLO algorithm, and the like. Moreover, a currently common deep neural network model for face detection is the RetinaFace algorithm.
Furthermore, when the model is used for detecting the human face, the input of the model is the collected personnel image, and the output of the model after detection is the rectangular image area for representing the position of the human face in the personnel image. Wherein, aiming at each face area in the personnel image, a rectangular image area is output.
Moreover, when the model is used for detecting the face, the key points in the face can be detected and obtained in the rectangular image area where the face position in the output person image is located. The number of the key points may be 5 points or 68 points. For example, as shown in fig. 2(a), the number of detected key points is 5 points, including two eyes, a nose, and two mouth corners. As shown in fig. 2(b), the number of detected keypoints is 68 points, so that a more accurate position in the face can be included, wherein fig. 2(c) is the keypoint characterization of fig. 2 (b).
The difference value between the face image to be recognized and each reference face image group can represent the difference between the feature of the face image to be recognized and the feature of the face image in the reference face image group, so that the smaller the difference value is, the smaller the difference between the feature of the face image to be recognized and the feature of the face image in the reference face image group is, that is, the higher the similarity between the feature of the face image to be recognized and the feature of the face image in the reference face image group is, the higher the possibility that the person information corresponding to the face image to be recognized is the person information corresponding to the reference face image group is.
The calculation method of the difference values will be described later.
S102: judging whether a first image group exists according to the difference value; if the first image group exists, executing step S103; otherwise, executing step S104;
wherein the first image group is: and the difference value between the reference face image group and the face image to be recognized is smaller than a first threshold value.
After the difference value between the face image to be recognized and each preset reference face image group is obtained through calculation, whether the difference value between the face image to be recognized and each preset reference face image group is smaller than a first threshold value or not can be judged according to each reference face image group. When a reference face image group exists, and the difference value between the face image to be recognized and the reference face image group is smaller than a first threshold value, the reference face image group can be determined as a first image group. That is, when it is determined that the difference value between the face image to be recognized and a certain reference face image group is smaller than a first threshold value, it may be determined that a first image group exists, and the first image group is: and the difference value between the reference face image group and the face image to be recognized is smaller than a first threshold value.
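The judgment of steps S102 and S103 can be sketched as follows. Selecting the minimum qualifying difference value is an illustrative choice of ours; the text only requires that some group's difference value fall below the first threshold:

```python
import math

def find_first_group(diff_values, first_threshold):
    """Return the index of the reference face image group whose difference
    value with the face image to be recognized is below the first threshold
    (the smallest such value), or None when no group qualifies."""
    best_index, best_diff = None, math.inf
    for j, d in enumerate(diff_values):
        if d < first_threshold and d < best_diff:
            best_index, best_diff = j, d
    return best_index
```

A returned index means the first image group exists and recognition succeeds against that group; `None` triggers the fallback judgment of step S104.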
S103: and obtaining the personnel identity corresponding to the face image to be recognized based on the personnel information corresponding to the first image group.
Because the difference value between the face image to be recognized and the first image group is smaller than the first threshold value, it can be stated that the person identity corresponding to the face image to be recognized is the person identity corresponding to the first image group, that is, it can be stated that the person corresponding to the face image to be recognized and the person corresponding to the first image group are the same person. At this time, it can be determined that the face of the face image to be recognized is successfully recognized, and further, the identity of the person corresponding to the face image to be recognized can be obtained based on the person information corresponding to the first image group.
Optionally, the person information corresponding to the first image group, that is, the person information of the reference face image group corresponding to the first image group, may at least include one of the following types of information: person name, person gender, person preference, person native place, person habit, etc. Of course, other types of information may be included.
Further, optionally, after obtaining the identity of the person corresponding to the face image to be recognized, the person information corresponding to the face image to be recognized may be further determined from the person information corresponding to the first image group.
The information of the person corresponding to the first image group can be used as the information of the person corresponding to the face image to be recognized, and part of the information of the person corresponding to the first image group can be used as the information of the person corresponding to the face image to be recognized. For example, when a driver and a passenger are subjected to face recognition, the preference and the riding habit of the passenger can be selected from the passenger information corresponding to the first image group as the passenger information corresponding to the face image to be recognized.
Further, optionally, after obtaining the person information corresponding to the face image to be recognized, personalized response operation may be further provided for the person corresponding to the person image to be recognized according to the obtained person information. For example, when a driver and a passenger perform face recognition, the seat back inclination angle of the seat where the person corresponding to the face image to be recognized is located can be adjusted according to the person preference and the person riding habit in the person information corresponding to the obtained face image to be recognized, and a radio station program is recommended for the person.
S104: judging whether a second image group exists according to the difference value; if the second image group exists, executing step S105;
wherein the second image group is: the difference value between the reference face image group and the face image to be recognized is not smaller than a first threshold value and is smaller than a second threshold value; the first threshold value is smaller than the second threshold value;
When it is determined that the difference value between the face image to be recognized and every reference face image group is not smaller than the first threshold value, no first image group exists among the reference face image groups, that is, no reference face image group is considered an exact match for the face image to be recognized. It may then be further judged whether the difference value between the face image to be recognized and any reference face image group is smaller than a second threshold value, where the first threshold value is smaller than the second threshold value. That is to say, it may be further determined whether there is a reference face image group whose difference value from the face image to be recognized lies between the first threshold value and the second threshold value, that is, whether a reference face image group matching the face image to be recognized exists is determined according to the second threshold value.
If there exists a reference face image group whose difference value from the face image to be recognized is not smaller than the first threshold value but smaller than the second threshold value, that reference face image group is the second image group. Correspondingly, if no such group exists, it can be determined that the recognition of the face image to be recognized fails, and the person information corresponding to the face image to be recognized cannot be determined.
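The fallback judgment against the second threshold might be sketched like this; the helper name and the choice of the closest candidate among qualifying groups are our assumptions:

```python
def find_second_group(diff_values, first_threshold, second_threshold):
    """Fallback lookup used when no group passes the first threshold:
    return the index of the closest group whose difference value lies in
    [first_threshold, second_threshold), or None when none does."""
    candidates = [(d, j) for j, d in enumerate(diff_values)
                  if first_threshold <= d < second_threshold]
    return min(candidates)[1] if candidates else None
```

A returned index identifies a candidate second image group, which is then confirmed by the tracking check of step S105.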
S105: judging whether a person corresponding to the face image to be recognized is a tracked person or not; if the person is tracked, executing step S106;
when it is determined that the second image group exists, it can be stated that the person corresponding to the second image group may be the same person as the person corresponding to the face image to be recognized. Furthermore, in order to determine whether the person corresponding to the second image group is the same person as the person corresponding to the face image to be recognized, it may be further determined whether the person corresponding to the face image to be recognized is a tracked person.
That is to say, whether the image acquisition object of the facial image to be recognized is always in the acquired image acquired by the image acquisition device can be judged by judging whether the person corresponding to the facial image to be recognized is the tracked person, so as to determine whether the reason why the first image group does not exist is caused by the change of the position state of the tracked person.
Whether the person corresponding to the face image to be recognized is a tracked person may be judged in various ways, and the embodiment of the present invention is not particularly limited in this regard. In a specific implementation manner, a preset tracking algorithm may be used to judge whether the person corresponding to the face image to be recognized is a tracked person, that is, whether the person has remained present in the images captured by the image acquisition device. The preset tracking algorithm may be the SORT algorithm (Simple Online and Realtime Tracking, a target tracking algorithm) or the DeepSORT algorithm (Deep Simple Online and Realtime Tracking, a multi-target tracking algorithm); of course, the preset tracking algorithm may also be another tracking algorithm, which is all reasonable.
The detected face image may be compared one by one with face images stored in advance; when the similarity between the detected face image and a certain pre-stored face image is high, it can be determined that the person corresponding to the detected face image and the person corresponding to that pre-stored face image are the same person, that is, the person corresponding to the detected face image is a tracked person.
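This comparison-based tracking check can be sketched as follows; the similarity function and the threshold are placeholders for illustration, not values fixed by the text:

```python
def is_tracked_person(detected_vector, stored_vectors, similarity_fn, threshold):
    """A person counts as tracked when the newly detected face is
    sufficiently similar to any face stored from earlier frames."""
    return any(similarity_fn(detected_vector, v) >= threshold
               for v in stored_vectors)
```

In practice the stored vectors would come from the frames in which the person was previously detected by the tracking algorithm.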
S106: and obtaining the personnel identity corresponding to the face image to be recognized based on the personnel information corresponding to the second image group.
If the person corresponding to the face image to be recognized is judged to be the tracked person, it can be shown that the image acquisition object of the face image to be recognized is always located in the acquired image acquired by the image acquisition device, so that the reason that the first image group does not exist is determined to be caused by the change of the position state of the tracked person, and further, the person identity corresponding to the face image to be recognized is the person identity corresponding to the second image group.
Moreover, the reason why the difference value between the face image to be recognized and the second image group is not smaller than the first threshold value may be: the position state of the person corresponding to the face image to be recognized has changed, so that the acquired face image to be recognized is a non-frontal image of the person, which increases the difference value between the face image to be recognized and the second image group; the difference value increased for this reason, however, generally does not exceed the second threshold value.
Therefore, because the person corresponding to the face image to be recognized is the tracked person, it can be stated that the person identity corresponding to the face image to be recognized is the person identity corresponding to the second image group, that is, it can be stated that the person corresponding to the face image to be recognized and the person corresponding to the second image group are the same person. At this time, it can be determined that the face of the face image to be recognized is successfully recognized, and further, the identity of the person corresponding to the face image to be recognized can be obtained based on the person information corresponding to the second image group.
Optionally, the person information corresponding to the second image group, that is, the person information of the reference face image group corresponding to the second image group, may at least include one of the following types of information: person name, person gender, person preference, person native place, person habit, etc. Of course, other types of information may be included.
Further, optionally, after obtaining the identity of the person corresponding to the face image to be recognized, the person information corresponding to the face image to be recognized may be further determined from the person information corresponding to the second image group.
The person information corresponding to the second image group can be used as the person information corresponding to the face image to be recognized, and part of the information in the person information corresponding to the second image group can be used as the person information corresponding to the face image to be recognized. For example, when a driver and a passenger are subjected to face recognition, the preference and the riding habit of the passenger can be selected from the passenger information corresponding to the second image group as the passenger information corresponding to the face image to be recognized.
Further, optionally, after obtaining the person information corresponding to the face image to be recognized, personalized response operation may be further provided for the person corresponding to the person image to be recognized according to the obtained person information. For example, when a driver and a passenger perform face recognition, the seat back inclination angle of the seat where the person corresponding to the face image to be recognized is located can be adjusted according to the person preference and the person riding habit in the person information corresponding to the obtained face image to be recognized, and a radio station program is recommended for the person.
Correspondingly, if the person corresponding to the face image to be recognized is judged not to be the tracked person, the failure of the recognition of the face image to be recognized can be determined, and the person information corresponding to the face image to be recognized cannot be determined.
As can be seen from the above, in the scheme provided in the embodiment of the present invention, by using a fallback threshold mechanism and the judgment against the second threshold, the face image to be recognized can be successfully recognized even when its recognition result cannot be determined through the first threshold. Therefore, during continuous recognition of the person corresponding to the face image to be recognized, the possibility that recognition success and recognition failure alternate due to changes in the person's position state can be reduced. The robustness of face recognition can thus be improved, personalized response actions can be continuously provided for the person corresponding to the face image to be recognized, and the user experience of that person is improved.
Next, a manner of calculating a difference value between the face image to be recognized and each reference face image group in the step S101 is described as an example.
Optionally, in a specific implementation manner, the step S101 may include the following steps 1011 and 1012.
Step 1011: respectively calculating the similarity between the face image to be recognized and each reference face image group;
in this step, for each reference face image group, the similarity between the face image to be recognized and that reference face image group may be calculated, that is, one corresponding similarity is obtained for each reference face image group. In this way, the similarities between the face image to be recognized and all the reference face image groups can be calculated respectively.
Step 1012: and respectively calculating difference values of the face image to be recognized and each reference face image group based on the similarity.
In this step, for each reference face image group, after the similarity between the face image to be recognized and that reference face image group is calculated in step 1011, the difference value between the face image to be recognized and that reference face image group may be determined based on the similarity; or, after the similarities between the face image to be recognized and all the reference face image groups have been calculated in step 1011, the difference values between the face image to be recognized and each reference face image group may be determined together.
Optionally, the difference value between the face image to be recognized and the j-th reference face image group may be denoted as d_j, where j = 1, 2, …, m, and m is the number of reference face image groups, that is, m reference face image groups are preset.
In this step, the difference value between the face image to be recognized and each reference face image group may be the difference between a preset value and the corresponding similarity, that is, the difference obtained by subtracting the similarity between the face image to be recognized and the reference face image group from the preset value, where the preset value is generally 1.
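A minimal sketch of this subtraction, with the preset value defaulting to 1 as stated (the helper name is ours):

```python
def difference_values(similarities, preset_value=1.0):
    """Compute d_j = preset_value - s_j for every reference face
    image group, given the per-group similarities s_j."""
    return [preset_value - s for s in similarities]
```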
In this specific implementation manner, the step 1011 may be executed in multiple ways to obtain the similarity between the face image to be recognized and each reference face image group, and thus, the embodiment of the present invention is not specifically limited.
Optionally, in a specific implementation manner, the step 1011 may include the following steps 1011A to 1011C.
Step 1011A: extracting a first feature vector of a face image to be recognized;
step 1011B: and respectively calculating second feature vectors of the reference face image groups, wherein the second feature vectors of the reference face image groups are acquired based on the third feature vectors of all face images in the reference face image groups for each reference face image group.
Specifically, for each reference image group, if the reference image group includes one face image, the second feature vector of the reference face image group is the third feature vector of the face image included in the reference face image group; and if the reference face image group comprises a plurality of face images, the second feature vector of the reference face image group is the average value of the third feature vectors of all the face images in the reference face image group.
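The averaging rule above can be sketched as follows (a pure-Python element-wise mean, assuming all member vectors share the same dimension):

```python
def group_feature_vector(member_vectors):
    """Second feature vector of a reference face image group: the
    element-wise mean of the third feature vectors of its members.
    With a single member this degenerates to that member's own vector."""
    n, dim = len(member_vectors), len(member_vectors[0])
    return [sum(v[i] for v in member_vectors) / n for i in range(dim)]
```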
Step 1011C: and respectively calculating the similarity between the first characteristic vector of the face image to be recognized and the second characteristic vector of each reference face image group as the similarity between the face image to be recognized and each group of reference face images.
When calculating the similarity between the face image to be recognized and each reference face image group, the first feature vector of the face image to be recognized may first be extracted according to step 1011A and the second feature vector of each reference face image group calculated according to step 1011B; the similarity between the first feature vector and the second feature vector of each reference face image group is then calculated, and this similarity is taken as the similarity between the face image to be recognized and that reference face image group.
It should be noted that, in order to obtain the similarity between the face image to be recognized and each reference face image group by calculating the similarity of the feature vectors, the first feature vectors and each second feature vector are vectors with the same dimension and the same numerical value type. Furthermore, since the second feature vector of each reference face image group is obtained based on the third feature vectors of all face images in the reference face image group, the dimensions and numerical types of the third feature vectors of all images in each reference face image group are the same as those of the first feature vector and each second feature vector.
Optionally, the first feature vector of the face image to be recognized and the third feature vectors of all face images in each reference face image group may be high-dimensional vectors extracted from the face images by using a feature extraction network. For example, they may be 512-dimensional floating-point vectors extracted from the face image to be recognized and from all face images in each reference face image group by using an InsightFace network.
Of course, the feature extraction network may also be another network, such as FaceNet, SphereFace, CosFace, and the like, and the dimensions and numerical types of the feature vectors extracted by different feature extraction networks may differ.
Furthermore, the similarity between the first feature vector and each second feature vector of the face image to be recognized can be calculated in various ways. For example, the similarity between the first feature vector and each second feature vector, that is, the similarity between the face image to be recognized and each reference face image group, may be obtained by calculating the mean square error, the euclidean distance, and the like of the first feature vector and the second feature vector of each reference face image group.
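Pure-Python versions of the two metrics named above, assuming equal-dimension vectors (library-free sketches, not the patented implementation):

```python
import math

def mean_squared_error(a, b):
    """MSE between two feature vectors of the same dimension."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def euclidean_distance(a, b):
    """Euclidean distance between two feature vectors of the same dimension."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

For both metrics, a smaller value means the two feature vectors are more similar.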
Illustratively, for the first feature vector A and the second feature vector B, the mean square error is calculated as:

MSE(A, B) = (1/n) · Σ_{i=1}^{n} (A_i − B_i)²

wherein MSE(A, B) is the mean square error of the first feature vector A and the second feature vector B, n is the dimension of the first feature vector A and the second feature vector B, A_i is the i-th element value in the first feature vector A, B_i is the i-th element value in the second feature vector B, and i = 1, 2, …, n.
illustratively, for the first feature vector a and the second feature vector B, the calculation formula of the euclidean distance is:
Figure 136982DEST_PATH_IMAGE008
wherein d is the Euclidean distance between the first feature vector A and the second feature vector B, n is the dimension of the first feature vector A and the second feature vector B,
Figure 996354DEST_PATH_IMAGE005
for the ith element value in the first feature vector a,
Figure 605189DEST_PATH_IMAGE006
for the ith element value in the second feature vector B,
Figure 840999DEST_PATH_IMAGE007
optionally, the similarity between the first feature vector and each second feature vector may be obtained by calculating a cosine value of an included angle between the first feature vector and each second feature vector. That is, the cosine value of the included angle between the first feature vector and each second feature vector may be calculated as the similarity between the first feature vector and each second feature vector.
For example, as shown in fig. 3, when the first feature vector and a certain second feature vector are the vectors A and B in fig. 3, respectively, the cosine value of the included angle between the vectors A and B is:

cos θ = (x_A · x_B + y_A · y_B) / ( sqrt(x_A² + y_A²) · sqrt(x_B² + y_B²) )

wherein cos θ is the cosine of the angle between the vectors A and B, x_A and y_A are the abscissa and ordinate of the vector A in the two-dimensional image coordinate system, and x_B and y_B are the abscissa and ordinate of the vector B in the two-dimensional image coordinate system.
Further, for n-dimensional vectors, the cosine value of the included angle between the vectors A and B may also be calculated as:

cos θ = ( Σ_{i=1}^{n} A_i · B_i ) / ( sqrt(Σ_{i=1}^{n} A_i²) · sqrt(Σ_{i=1}^{n} B_i²) )

wherein n is the dimension of the vectors A and B, A_i is the i-th element value in the vector A, B_i is the i-th element value in the vector B, and i = 1, 2, …, n.
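The n-dimensional cosine similarity described above can be sketched as:

```python
import math

def cosine_similarity(a, b):
    """cos(theta) = sum_i a_i * b_i / (||a|| * ||b||) for two
    n-dimensional feature vectors a and b."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Unlike the distance metrics, a larger cosine value means the two feature vectors are more similar, with 1 for parallel vectors.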
When the first image group exists, the similarity between the face image to be recognized and the first image group is relatively high. Therefore, in order to expand the reference face image group and obtain a more accurate matching standard for face recognition, when the first image group exists, the reference face image group corresponding to the first image group may be updated with the face image to be recognized.
Based on this, optionally, in a specific implementation manner, the face recognition method provided in the embodiment of the present invention may further include the following steps 1-2.
Step 1: when the first image group exists, judging whether the difference value between the face image to be recognized and the first image group is smaller than a third threshold value; wherein the third threshold is smaller than the first threshold; if yes, executing step 2;
step 2: and updating the reference face image group corresponding to the first image group by using the face image to be recognized.
In this specific implementation manner, when the first image group is determined to exist, it may be further determined whether a difference value between the face image to be recognized and the first image group is smaller than a third threshold value.
Furthermore, since the third threshold is smaller than the first threshold, when the difference value between the face image to be recognized and the first image group is smaller than the third threshold, the similarity between the face image to be recognized and the first image group is even higher, so the first image group, that is, the reference face image group corresponding to the first image group, can be updated with the face image to be recognized. In this way, the face image to be recognized becomes an image that is utilized in the subsequent face recognition process.
The second feature vector of each reference face image group is obtained based on the third feature vectors of all face images in the reference face image group, so that after the first image group is updated by using the face image to be recognized, the second feature vector of the updated first image group needs to be updated, that is, the second feature vector of the reference face image group corresponding to the updated first image group is updated.
The second feature vector of the updated first image group may be updated by first determining the third feature vectors of all the facial images in the updated first image group, and then calculating the second feature vector of the updated first image group based on the third feature vectors of all the facial images in the updated first image group. In this way, in the subsequent face recognition process, the second feature vector of the updated first image group is used. The second feature vector of the updated first image group may be an average value of the third feature vectors of all the face images in the updated first image group.
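A sketch of this update step: append the new third feature vector to the group, then recompute the group's second feature vector as the mean of all members (in-place list mutation is an implementation choice of ours):

```python
def update_group(member_vectors, new_vector):
    """Append the third feature vector of the newly recognized image to
    the group, then return the group's updated second feature vector,
    i.e. the mean of all member vectors."""
    member_vectors.append(new_vector)
    n, dim = len(member_vectors), len(new_vector)
    return [sum(v[i] for v in member_vectors) / n for i in range(dim)]
```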
Further, in order to avoid low recognition accuracy caused by the accumulation of errors, the number of images included in each reference face image group should not be too large.
Based on this, optionally, in a specific implementation manner, in the step 2, updating the reference face image group corresponding to the first image group by using the face image to be recognized may include the following steps 21 to 22:
step 21: judging whether the number of the face images included in the reference face image group corresponding to the first image group reaches a preset number or not; if not, go to step 22;
step 22: and adding the face image to be recognized to the reference face image group corresponding to the first image group.
In this specific implementation manner, the reference face image group corresponding to the first image group may include at most a preset number of face images. When the number of face images included in the reference face image group corresponding to the first image group has not reached the preset number, a new face image may be added to that group without producing a serious accumulated error when the second feature vector is calculated, and the identity of the person corresponding to the added face image is the same as the identity of the person corresponding to the first image group. After the face image to be recognized is added to the reference face image group corresponding to the first image group, the number of face images included in the first image group does not exceed the preset number.
The specific value of the preset number may be chosen based on how errors accumulate and on the required face recognition accuracy; the embodiment of the present invention does not limit it, and it may be, for example, 2 or 3.
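The capped-append logic of steps 21 and 22 can be sketched as follows; the function name and image placeholders are hypothetical:

```python
def try_append(group, new_face_image, preset_number=3):
    # Steps 21-22: add the face image to be recognized only while the group
    # has not yet reached the preset number of face images.
    if len(group) < preset_number:
        group.append(new_face_image)
        return True
    return False

group = ["img_a", "img_b"]
assert try_append(group, "img_new")        # added: group had 2 < 3 images
assert not try_append(group, "img_more")   # rejected: cap of 3 reached
```

When `try_append` returns `False`, the replacement path of steps 23 to 25 applies instead.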
Optionally, in another specific implementation manner, in the step 2, updating the reference face image group corresponding to the first image group by using the face image to be recognized may include the following steps 21 and steps 23 to 25:
step 21: judging whether the number of the face images included in the reference face image group corresponding to the first image group reaches a preset number or not; if yes, go to step 23;
step 23: obtaining, for each face image in the reference face image group corresponding to the first image group, a first difference value between that face image and the first image group, and obtaining a second difference value between the face image to be recognized and the first image group;
step 24: judging whether a face image with a first difference value larger than a second difference value exists in a reference face image group corresponding to the first image group; if yes, go to step 25;
step 25: deleting the face image corresponding to the largest first difference value from the reference face image group corresponding to the first image group, and adding the face image to be recognized to that group.
In this specific implementation manner, the reference face image group corresponding to the first image group may include at most a preset number of face images. When that number has been reached, updating the group with the face image to be recognized requires replacing one of its existing face images, so that the number of face images in the updated group does not exceed the preset number.
Based on this, when the number of face images in the reference face image group corresponding to the first image group is determined to have reached the preset number, the face image to be recognized cannot simply be added to the group. Instead, a face image to be replaced must first be determined among all face images in the group; that image is then deleted, and the face image to be recognized is added in its place.
To determine the face image to be replaced, when the number of face images in the reference face image group corresponding to the first image group is judged to have reached the preset number, a first difference value between each face image in that group and the first image group, and a second difference value between the face image to be recognized and the first image group, can be obtained. Here, the second difference value between the face image to be recognized and the first image group is simply the difference value between the face image to be recognized and the first image group that was already calculated in step S101 when computing the difference values with each reference face image group.
Furthermore, the first difference value between each face image in the reference face image group corresponding to the first image group and the first image group can be determined by calculating the similarity between the third feature vector of each face image in the reference face image group corresponding to the first image group and the second feature vector of the first image group. Moreover, the determining process is similar to the above process of calculating the difference value between the face image to be recognized and each reference face image group, and is not repeated herein.
After the second difference value and each first difference value are obtained, their magnitudes can be compared to judge whether the reference face image group corresponding to the first image group contains a face image whose first difference value is greater than the second difference value. When a first difference value is greater than the second difference value, the corresponding face image differs from the first image group more than the face image to be recognized does; in other words, the face image to be recognized is more similar to the first image group than that face image is. That face image can therefore serve as the face image to be replaced.
It can be understood that the larger the first difference value, the lower the similarity between the corresponding image and the first image group. Therefore, when the group contains face images whose first difference values are greater than the second difference value, the face image corresponding to the largest first difference value may be deleted from the reference face image group corresponding to the first image group, and the face image to be recognized added in its place.
When several face images have first difference values greater than the second difference value, the largest first difference value is necessarily greater than the second difference value, so the face image corresponding to the largest first difference value is taken as the face image to be replaced and the face image to be recognized replaces it. When exactly one such face image exists, the face image to be recognized directly replaces that image.
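Steps 23 to 25 can be sketched as below; the function name and image placeholders are hypothetical, and plain lists of difference values stand in for the actual feature-vector comparisons:

```python
def replace_worst_member(group, first_diffs, new_face_image, second_diff):
    # Steps 23-25: in a full group, find the member with the largest first
    # difference value; if it exceeds the second difference value of the
    # image to be recognized, replace that member with the new image.
    worst = max(range(len(first_diffs)), key=first_diffs.__getitem__)
    if first_diffs[worst] > second_diff:
        group[worst] = new_face_image
        return worst
    return None  # no member differs more than the new image: keep group as-is

group = ["img_a", "img_b", "img_c"]
replaced = replace_worst_member(group, [0.20, 0.50, 0.30], "img_new", 0.25)
# member 1 (first difference 0.50 > 0.25) is replaced by "img_new"
```

Returning `None` corresponds to the case where no face image in the group has a first difference value larger than the second difference value, so the group is left unchanged.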
Generally, the face images in a person's various certificates, such as the face image on an identity card or a driver's license, have high definition and represent the user's facial features accurately and comprehensively.
Based on this, optionally, in a specific implementation manner, the reference face image group corresponding to the first image group includes a benchmark face image group and a dynamic face image group, where the benchmark face image group includes at least one benchmark face image.

Correspondingly, in this specific implementation manner, updating the reference face image group corresponding to the first image group with the face image to be recognized in the above step 2 may include the following step 26:

step 26: keeping the benchmark face image group unchanged, and updating the dynamic face image group with the face image to be recognized.

In this implementation, the face images in the benchmark face image group may be images that are designated when the first image group is constructed and cannot be deleted; these are generally images of high definition that represent the user's facial features accurately and comprehensively, such as the face image on an identity card or a driver's license.

The face images in the dynamic face image group may be collected face images to be recognized that are gradually added during the face recognition process; these can be deleted and replaced by newer face images.

Based on the above, when the reference face image group corresponding to the first image group is updated with the face image to be recognized, it is the dynamic face image group that is updated.
For example, when the number of face images included in the reference face image group corresponding to the first image group has reached the preset number, a first difference value between each face image in the dynamic face image group and the first image group may be obtained, and it is judged whether the dynamic face image group contains a face image whose first difference value is greater than the second difference value between the face image to be recognized and the first image group. If so, the face image corresponding to the largest first difference value is deleted from the dynamic face image group and the face image to be recognized is added to it. The image replaced by the face image to be recognized is thus always one from the dynamic face image group, and the newly added face image may itself be deleted and replaced in later recognition.
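A sketch of step 26 under the assumptions above: the fixed sub-group (labeled "benchmark" here) is left untouched while the dynamic face image group grows or has its worst-matching member replaced. All names and values are hypothetical:

```python
def update_group(benchmark, dynamic, new_face_image, dyn_first_diffs,
                 second_diff, preset_number):
    # Step 26: the benchmark sub-group is never modified; only the dynamic
    # sub-group grows (if the whole group is under the cap) or has its
    # worst-matching member replaced.
    if len(benchmark) + len(dynamic) < preset_number:
        dynamic.append(new_face_image)
        return
    worst = max(range(len(dynamic)), key=dyn_first_diffs.__getitem__)
    if dyn_first_diffs[worst] > second_diff:
        dynamic[worst] = new_face_image

benchmark = ["id_card_photo"]          # fixed, e.g. the identity-card image
dynamic = ["frame_1", "frame_2"]       # collected during recognition
update_group(benchmark, dynamic, "frame_3", [0.4, 0.1], 0.2, preset_number=3)
# group is full (1 + 2 = 3): "frame_1" (difference 0.4 > 0.2) is replaced
```

Keeping the high-quality certificate image out of the replacement pool bounds how far the group can drift from the person's true appearance.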
It is understood that, in many cases, the face region in the acquired face image to be recognized may be a non-vertical region owing to the person's posture or position. Since a non-vertical face region reduces the accuracy of face recognition, the face image to be recognized can, in that case, be aligned to obtain an aligned face image to be recognized whose face region is vertical.
For example, if the face region in the face image to be recognized is a non-vertical region as shown in fig. 4(a), the image is aligned to obtain the aligned face image to be recognized shown in fig. 4(b), whose face region is a vertical region.
Based on this, optionally, in a specific implementation manner, when the face region in the face image to be recognized is a non-vertical region, before the step of calculating the difference values between the face image to be recognized and each reference face image group in step S101 is executed, the face recognition method provided in the embodiment of the present invention may further include the following step 3:

step 3: aligning the face image to be recognized.
Correspondingly, in this specific implementation manner, in the step S101, the step of calculating the difference value between the face image to be recognized and each reference face image group may include the following step 1010:
step 1010: calculating difference values between the aligned face image to be recognized and each reference face image group.
The first image group in step S102 may then be: a reference face image group whose difference value with the aligned face image to be recognized is smaller than the first threshold value;

the second image group in step S104 may be: a reference face image group whose difference value with the aligned face image to be recognized is not smaller than the first threshold value and is smaller than the second threshold value.
Optionally, in step 3, the face image to be recognized may be aligned as follows: a transformation matrix is calculated from the key points detected in the face region of the face image to be recognized, and the non-vertical face image to be recognized is converted into a vertical one through that transformation matrix; that is, the alignment is realized by the transformation matrix.
Specifically, in a vertical face, the left and right cheeks, the two eyes, and the left and right corners of the mouth should be symmetric. For example, the key points corresponding to the two eyes should lie on the same horizontal line; that is, in the two-dimensional coordinate system corresponding to the face image to be recognized, the vertical-axis (y) coordinates of the two eye key points are equal.
Based on these rules, since the face region detection frame obtained when detecting key points has a fixed size, the relative positions of the detected key points are fixed and predictable; a vertical face region can therefore be obtained simply by scaling, translating and rotating the face region in the face image to be recognized.

It should be noted that scaling, translation and rotation operate on the pixels of the face image to be recognized. Since each face image can be understood as a two-dimensional matrix of coordinates, the transformation is expressed with the coordinate matrices below.
The scaling matrix is:

$$S=\begin{bmatrix} c & 0 & 0\\ 0 & c & 0\\ 0 & 0 & 1 \end{bmatrix}$$

where $c$ is the scaling factor;

the translation matrix is:

$$T=\begin{bmatrix} 1 & 0 & d_x\\ 0 & 1 & d_y\\ 0 & 0 & 1 \end{bmatrix}$$

where $d_x$ and $d_y$ are respectively the horizontal-axis and vertical-axis displacements in the two-dimensional coordinate system corresponding to the face image to be recognized;

the rotation matrix is:

$$R=\begin{bmatrix} \cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1 \end{bmatrix}$$

where $\theta$ is the angle to be rotated.

Further, based on the scaling matrix, the translation matrix and the rotation matrix, let the matrix of key points in the finally obtained vertical face region be $P'$, and the matrix of key points detected in the non-vertical face region be $P$ (both in homogeneous coordinates), and assume

$$a=c\cos\theta,\qquad b=c\sin\theta.$$

Composing the three matrices then gives:

$$P'=\begin{bmatrix} a & -b & d_x\\ b & a & d_y \end{bmatrix}P.$$

This result can be rewritten, for each key point $(x,y)$ with target position $(x',y')$, as:

$$x'=a\,x-b\,y+d_x,\qquad y'=b\,x+a\,y+d_y,$$

a system of linear equations from which $a$, $b$, $d_x$ and $d_y$ can be solved, for example by least squares over all key points.

With $a$, $b$, $d_x$ and $d_y$ determined, the relation $P'=\begin{bmatrix} a & -b & d_x\\ b & a & d_y \end{bmatrix}P$ converts the non-vertical face region in the face image to be recognized into a vertical face region.
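The derivation above amounts to fitting a similarity transform to key-point pairs. A minimal NumPy sketch (hypothetical names; least squares stands in for whatever solver an implementation actually uses):

```python
import numpy as np

def estimate_alignment(src_pts, dst_pts):
    # Solve a, b, dx, dy of the similarity transform
    #   x' = a*x - b*y + dx,   y' = b*x + a*y + dy
    # by least squares over the detected key points (src) and the expected
    # key points of a vertical face region (dst).
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(src_pts, dst_pts):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(xp)
        rows.append([y, x, 0.0, 1.0]);  rhs.append(yp)
    a, b, dx, dy = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs),
                                   rcond=None)[0]
    return a, b, dx, dy

def apply_alignment(params, pts):
    # Apply the solved transform to points of the non-vertical face region.
    a, b, dx, dy = params
    return [(a * x - b * y + dx, b * x + a * y + dy) for x, y in pts]

# Key points rotated by 90 degrees: the recovered transform maps each
# detected point to its vertical-face position (a = 0, b = 1, dx = dy = 0).
src = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(0.0, 1.0), (-1.0, 0.0), (-1.0, 1.0)]
params = estimate_alignment(src, dst)
```

In practice the same transform would be applied to the whole pixel grid of the face image, not just the key points.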
Optionally, in a specific implementation manner, for a tracked person, during the tracking process, each acquired face image of the person may be stored.
Based on this, optionally, in a specific implementation manner, the step S105 of judging whether the person corresponding to the face image to be recognized is a tracked person may include the following steps 1051 to 1055:
step 1051: determining a face image to be tracked from the third image group; the third image group is formed by face images acquired within a preset time range before the face images to be recognized are acquired;
step 1052: calculating a third difference value between the face image to be tracked and each reference face image group;
step 1053: judging whether a fourth image group exists according to the third difference value; if the fourth image group exists, execute step 1054; wherein the fourth image group is: a reference face image group with a third difference value with the face image to be tracked smaller than a first threshold value;
step 1054: judging whether the fourth image group and the second image group are the same reference face image group; if yes, go to step 1055;
step 1055: determining that the person corresponding to the face image to be recognized is the tracked person.
In this specific implementation manner, a third image group may be maintained to store the face images acquired within a preset time range before the face image to be recognized is acquired. When judging whether the person corresponding to the face image to be recognized is a tracked person, a face image to be tracked can be taken from the third image group, and a third difference value between it and each reference face image group is calculated; the fourth image group is then determined, according to these third difference values, as the reference face image group whose third difference value with the face image to be tracked is smaller than the first threshold.
Obviously, since the third difference value between the face image to be tracked and the fourth image group is smaller than the first threshold, the person corresponding to the face image to be tracked has the same identity as the person corresponding to the fourth image group. Hence, if the fourth image group and the second image group are the same reference face image group, the person corresponding to the second image group is being tracked. Further, since the difference value between the face image to be recognized and the second image group is smaller than the second threshold, the identity of the person corresponding to the face image to be recognized is, with high probability, the same as that of the person corresponding to the second image group; the person corresponding to the face image to be recognized can therefore be regarded as the tracked person.
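Steps 1051 to 1055 can be condensed into the following sketch; the group identifiers and difference values are hypothetical placeholders for the actual reference face image groups:

```python
def find_fourth_group(third_diffs, first_threshold):
    # Step 1053: the fourth image group is a reference face image group whose
    # third difference value with the face image to be tracked is below the
    # first threshold.
    for group_id, diff in third_diffs.items():
        if diff < first_threshold:
            return group_id
    return None

def is_tracked_person(third_diffs, first_threshold, second_group_id):
    # Steps 1054-1055: the person is tracked iff the fourth image group
    # exists and is the same reference face image group as the second group.
    fourth = find_fourth_group(third_diffs, first_threshold)
    return fourth is not None and fourth == second_group_id

print(is_tracked_person({"g1": 0.9, "g2": 0.2}, 0.3, "g2"))  # True
```

The recency constraint (the third image group only holds images from a preset time window) is what makes equality of the two groups a valid tracking signal.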
Corresponding to the face recognition method provided by the embodiment of the invention, the embodiment of the invention also provides a face recognition device.
Fig. 5 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present invention, and as shown in fig. 5, the apparatus may include the following modules:
a difference value calculating module 510, configured to obtain a face image to be recognized, and calculate a difference value between the face image to be recognized and each reference face image group; each image included in each reference face image group corresponds to the same person information;
a first image group determining module 520, configured to determine whether a first image group exists according to the difference value, if the first image group exists, trigger a first result determining module 530, and if the first image group does not exist, trigger a second image group determining module 540; wherein the first image group is: a reference face image group with the difference value with the face image to be recognized smaller than a first threshold value;
the first result determining module 530 is configured to determine, based on the person information corresponding to the first image group, a person identity corresponding to the face image to be recognized;
the second image group determining module 540 is configured to judge, according to the difference values, whether a second image group exists, and if the second image group exists, trigger the tracker judging module 550;

the tracker judging module 550 is configured to judge whether the person corresponding to the face image to be recognized is a tracked person, and if the person is tracked, trigger the second result determining module 560;
the second result determining module 560 is configured to determine, based on the person information corresponding to the second image group, the person identity corresponding to the facial image to be recognized.
As can be seen from the above, in the scheme provided in the embodiment of the present invention, by using a rollback threshold mechanism and by determining the second threshold, the successful recognition of the facial image to be recognized can be achieved under the condition that the recognition result of the facial image to be recognized cannot be determined by the first threshold. Therefore, in the continuous identification process of the person corresponding to the face image to be identified, the possibility that identification success and identification failure alternately occur due to the change of the position state of the person can be reduced. Therefore, the robustness of face recognition can be improved, personalized response actions can be continuously provided for the personnel corresponding to the face image to be recognized, and the user experience of the personnel is improved.
Corresponding to the face recognition method provided by the above embodiment of the present invention, an embodiment of the present invention further provides an electronic device, as shown in fig. 6, including a processor 601, a communication interface 603, a memory 602 and a communication bus 604, wherein the processor 601, the communication interface 603 and the memory 602 complete mutual communication through the communication bus 604,
a memory 602 for storing a computer program;
the processor 601 is configured to implement the steps of any of the face recognition methods provided in the embodiments of the present invention when executing the program stored in the memory 602.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
In another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any one of the face recognition methods provided in the embodiments of the present invention.
In another embodiment, the present invention further provides a computer program product containing instructions, which when run on a computer, causes the computer to perform the steps of any of the face recognition methods provided in the above embodiments of the present invention.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, apparatus embodiments, electronic device embodiments, computer-readable storage medium embodiments, and computer program product embodiments are described with relative simplicity as they are substantially similar to method embodiments, where relevant only as described in portions of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (9)

1. A face recognition method, comprising:
acquiring a face image to be recognized, and calculating difference values between the face image to be recognized and each reference face image group; each image included in each reference face image group corresponds to the same person information;
judging whether a first image group exists according to the difference value, wherein the first image group is as follows: a reference face image group with the difference value with the face image to be recognized smaller than a first threshold value;
when a first image group exists, determining the identity of a person corresponding to the face image to be recognized based on the person information corresponding to the first image group;
when the first image group does not exist, judging whether a second image group exists according to the difference value; wherein the second image group is: a reference face image group whose difference value with the face image to be recognized is not smaller than the first threshold value and is smaller than a second threshold value; the first threshold value is smaller than the second threshold value;
if the second image group exists, judging whether the person corresponding to the face image to be recognized is a tracked person or not;
if the face image is the tracked person, determining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group;
the step of judging whether the person corresponding to the face image to be recognized is a tracked person comprises the following steps:
determining a face image to be tracked from the third image group; the third image group is formed by face images acquired within a preset time range before the face images to be recognized are acquired;
calculating a third difference value between the face image to be tracked and each reference face image group;
judging whether a fourth image group exists according to the third difference value, wherein the fourth image group is: a reference face image group whose third difference value with the face image to be tracked is smaller than the first threshold value;
if the fourth image group exists, judging whether the fourth image group and the second image group are the same reference human face image group;
and if so, determining that the person corresponding to the face image to be recognized is the tracked person.
2. The method according to claim 1, wherein the step of calculating the difference value between the facial image to be recognized and each set of reference facial images comprises:
respectively calculating the similarity between the face image to be recognized and each reference face image group;
and respectively calculating difference values of the face image to be recognized and each reference face image group based on the similarity.
3. The method according to claim 2, wherein the step of calculating the similarity between the face image to be recognized and each reference face image group comprises:
extracting a first feature vector of the facial image to be recognized;
respectively calculating a second feature vector of each reference face image group; aiming at each reference face image group, the second feature vector of the reference face image group is obtained based on the third feature vectors of all face images in the reference face image group;
and respectively calculating the similarity between the first characteristic vector of the face image to be recognized and the second characteristic vector of each reference face image group as the similarity between the face image to be recognized and each reference face image group.
4. The method of claim 3, further comprising:
when the first image group exists, judging whether the difference value between the face image to be recognized and the first image group is smaller than a third threshold value; wherein the third threshold is less than the first threshold;
and if so, updating the reference face image group corresponding to the first image group by using the face image to be recognized.
5. The method according to claim 4, wherein the step of updating the reference facial image group corresponding to the first image group by using the facial image to be recognized comprises:
judging whether the number of the face images included in the reference face image group corresponding to the first image group reaches a preset number or not;
and if not, adding the face image to be recognized to the reference face image group corresponding to the first image group.
6. The method according to claim 4, wherein the step of updating the reference face image group corresponding to the first image group by using the face image to be recognized comprises:
judging whether the number of the face images included in the reference face image group corresponding to the first image group reaches a preset number or not;
if so, respectively acquiring, for each face image in the reference face image group corresponding to the first image group, a first difference value between that face image and the first image group, and acquiring a second difference value between the face image to be recognized and the first image group;
judging whether a face image with the first difference value larger than the second difference value exists in a reference face image group corresponding to the first image group;
and if so, deleting the face image corresponding to the largest first difference value in the reference face image group corresponding to the first image group, and adding the face image to be recognized to the reference face image group corresponding to the first image group.
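The update logic of claims 5-6 can be sketched as one routine: add the new image if the group is below capacity, otherwise replace the member whose first difference value exceeds the new image's second difference value by the largest margin. A hypothetical illustration only; `diff_fn`, `max_size`, and the plain-list representation of a group are assumptions, not the patented implementation.

```python
def update_reference_group(group, new_image, max_size, diff_fn):
    """Update a reference face image group in place with a newly
    recognized face image. `diff_fn(image, group)` returns the
    difference value between an image and the group."""
    if len(group) < max_size:
        # Claim 5: the preset number is not reached, so simply add
        # the face image to be recognized to the group.
        group.append(new_image)
        return group
    # Claim 6: the group is full. Compare each member's first
    # difference value against the new image's second difference value.
    second_diff = diff_fn(new_image, group)
    first_diffs = [diff_fn(img, group) for img in group]
    worst = max(range(len(group)), key=lambda i: first_diffs[i])
    if first_diffs[worst] > second_diff:
        # Delete the member with the largest first difference value
        # and add the face image to be recognized in its place.
        group[worst] = new_image
    return group
```

For example, with a toy `diff_fn` that measures distance to the group mean, an outlier member is replaced by a new image that sits closer to the group.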
7. The method according to claim 4, wherein the first image group comprises a reference face image group and a dynamic face image group, wherein the reference face image group comprises at least one reference face image; the step of updating the reference face image group corresponding to the first image group by using the face image to be recognized comprises the following steps:
and keeping the reference face image group, and updating the dynamic face image group by using the face image to be recognized.
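Claim 7's split between a fixed reference (enrollment) set and a refreshable dynamic set can be sketched as a small data structure. The field names, the sliding-window eviction policy, and the `max_dynamic` capacity are all assumptions introduced for illustration; the claim only requires that updates keep the reference face image group intact and touch the dynamic group alone.

```python
from dataclasses import dataclass, field

@dataclass
class FirstImageGroup:
    """First image group per claim 7: a fixed reference face image
    group (at least one enrolled image) plus a dynamic face image
    group that absorbs updates."""
    reference_images: tuple                      # never modified
    dynamic_images: list = field(default_factory=list)

    def update(self, face_image, max_dynamic=5):
        # Keep the reference face image group; refresh only the
        # dynamic face image group with the face image to be recognized.
        self.dynamic_images.append(face_image)
        if len(self.dynamic_images) > max_dynamic:
            self.dynamic_images.pop(0)           # evict the oldest
```

After any number of updates, the enrolled images are untouched and the dynamic group holds at most the newest `max_dynamic` images.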
8. An apparatus for face recognition, the apparatus comprising:
the difference value calculation module is used for acquiring a face image to be recognized and calculating the difference value between the face image to be recognized and each reference face image group; each image included in each reference face image group corresponds to the same person information;
the first image group judging module is used for judging whether a first image group exists according to the difference value, if so, the first result determining module is triggered, otherwise, the second image group judging module is triggered; wherein the first image group is: a reference face image group with the difference value with the face image to be recognized smaller than a first threshold value;
the first result determining module is used for determining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the first image group;
the second image group judging module is used for judging whether a second image group exists according to the difference value, and if the second image group exists, the tracker judging module is triggered;
the tracker judging module is used for judging whether the person corresponding to the face image to be recognized is a tracked person; if so, triggering a second result determining module;
the second result determining module is used for determining the identity of the person corresponding to the face image to be recognized based on the person information corresponding to the second image group;
the tracker judgment module is specifically configured to: determining a face image to be tracked from the third image group; the third image group is formed by face images acquired within a preset time range before the face image to be recognized is acquired; calculating a third difference value between the face image to be tracked and each reference face image group; judging whether a fourth image group exists according to the third difference value, wherein the fourth image group is: a reference face image group whose difference value from the face image to be tracked is smaller than the first threshold; if the fourth image group exists, judging whether the fourth image group and the second image group are the same reference face image group; and if so, determining that the person corresponding to the face image to be recognized is the tracked person.
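The tracker check spelled out in claim 8 (and in claim 1's tail) can be sketched as follows: look back over face images captured in a preset time window, find any that strongly match a reference group (difference below the first threshold), and accept the current borderline face only if that matched group is the same group the current face weakly matched. The signature is hypothetical; `diff_to_groups(face)` is assumed to return a mapping from group identifiers to difference values.

```python
def is_tracked_person(recent_faces, second_group_id, diff_to_groups,
                      first_threshold):
    """Return True if the person corresponding to the face image to be
    recognized is a tracked person, per the claim 8 tracker logic."""
    for tracked_face in recent_faces:        # the third image group
        diffs = diff_to_groups(tracked_face)
        # The fourth image group: any reference group whose difference
        # from the tracked face is below the first threshold.
        for group_id, d in diffs.items():
            if d < first_threshold and group_id == second_group_id:
                # Fourth and second image groups coincide: the current
                # face belongs to the person already being tracked.
                return True
    return False
```

In effect, temporal continuity (a strong match moments earlier) is used to rescue a frame whose own match falls between the two thresholds.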
9. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
CN202011617404.7A 2020-12-31 2020-12-31 Face recognition method and device and electronic equipment Active CN112287918B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011617404.7A CN112287918B (en) 2020-12-31 2020-12-31 Face recognition method and device and electronic equipment
PCT/CN2021/113209 WO2022142375A1 (en) 2020-12-31 2021-08-18 Face recognition method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011617404.7A CN112287918B (en) 2020-12-31 2020-12-31 Face recognition method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112287918A CN112287918A (en) 2021-01-29
CN112287918B true CN112287918B (en) 2021-03-19

Family

ID=74425127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011617404.7A Active CN112287918B (en) 2020-12-31 2020-12-31 Face recognition method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN112287918B (en)
WO (1) WO2022142375A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287918B (en) * 2020-12-31 2021-03-19 Hubei Ecarx Technology Co., Ltd. Face recognition method and device and electronic equipment
CN112818901B (en) * 2021-02-22 2023-04-07 成都睿码科技有限责任公司 Wearing mask face recognition method based on eye attention mechanism
CN113459975B (en) * 2021-07-31 2022-10-04 重庆长安新能源汽车科技有限公司 Intelligent cabin system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1649408B1 (en) * 2003-06-30 2012-01-04 Honda Motor Co., Ltd. Systems and methods for training component-based object identification systems
US7978936B1 (en) * 2006-01-26 2011-07-12 Adobe Systems Incorporated Indicating a correspondence between an image and an object
US8254646B2 (en) * 2010-01-25 2012-08-28 Apple Inc. Image preprocessing
CN101964064B (en) * 2010-07-27 2013-06-19 上海摩比源软件技术有限公司 Human face comparison method
KR20170000748A (en) * 2015-06-24 2017-01-03 삼성전자주식회사 Method and apparatus for face recognition
CN106446754A (en) * 2015-08-11 2017-02-22 阿里巴巴集团控股有限公司 Image identification method, metric learning method, image source identification method and devices
CN107609383B (en) * 2017-10-26 2021-01-26 奥比中光科技集团股份有限公司 3D face identity authentication method and device
CN108614894B (en) * 2018-05-10 2021-07-02 西南交通大学 Face recognition database construction method based on maximum spanning tree
CN109902644A (en) * 2019-03-07 2019-06-18 北京海益同展信息科技有限公司 Face identification method, device, equipment and computer-readable medium
CN110728234A (en) * 2019-10-12 2020-01-24 爱驰汽车有限公司 Driver face recognition method, system, device and medium
CN111898413A (en) * 2020-06-16 2020-11-06 深圳市雄帝科技股份有限公司 Face recognition method, face recognition device, electronic equipment and medium
CN112287918B (en) * 2020-12-31 2021-03-19 湖北亿咖通科技有限公司 Face recognition method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yang Caiguang, "Face image detection system based on Adaboost," Intelligent Computer and Applications, 2019-09-30, vol. 9, no. 5, pp. 300-302. *

Also Published As

Publication number Publication date
CN112287918A (en) 2021-01-29
WO2022142375A1 (en) 2022-07-07

Similar Documents

Publication Publication Date Title
CN112287918B (en) Face recognition method and device and electronic equipment
CN108090456B (en) Training method for recognizing lane line model, and lane line recognition method and device
US10163022B1 (en) Method for learning text recognition, method for recognizing text using the same, and apparatus for learning text recognition, apparatus for recognizing text using the same
CN108388879B (en) Target detection method, device and storage medium
US11321945B2 (en) Video blocking region selection method and apparatus, electronic device, and system
WO2019051941A1 (en) Method, apparatus and device for identifying vehicle type, and computer-readable storage medium
CN110706261A (en) Vehicle violation detection method and device, computer equipment and storage medium
CN110807491A (en) License plate image definition model training method, definition detection method and device
CN112001932B (en) Face recognition method, device, computer equipment and storage medium
Molina-Moreno et al. Efficient scale-adaptive license plate detection system
US10423817B2 (en) Latent fingerprint ridge flow map improvement
WO2021031704A1 (en) Object tracking method and apparatus, computer device, and storage medium
CN113420682A (en) Target detection method and device in vehicle-road cooperation and road side equipment
US11250269B2 (en) Recognition method and apparatus for false detection of an abandoned object and image processing device
CN111814612A (en) Target face detection method and related device thereof
Sun et al. Vehicle Type Recognition Combining Global and Local Features via Two‐Stage Classification
CN111274965A (en) Face recognition method and device, computer equipment and storage medium
CN112686129B (en) Face recognition system and method
CN112163110A (en) Image classification method and device, electronic equipment and computer-readable storage medium
JPWO2009110410A1 (en) Image collation device, image collation feature storage medium, image collation method, and image collation program
CN116258748A (en) Track tracking method
US20240127567A1 (en) Detection-frame position-accuracy improving system and detection-frame position correction method
CN113313034B (en) Face recognition method and device, electronic equipment and storage medium
CN113642450A (en) Video face recognition method, system and storage medium
KR20230030996A (en) Object tracking apparatus and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220330

Address after: 430051 No. b1336, chuanggu startup area, taizihu cultural Digital Creative Industry Park, No. 18, Shenlong Avenue, Wuhan Economic and Technological Development Zone, Hubei Province

Patentee after: Yikatong (Hubei) Technology Co.,Ltd.

Address before: No.c101, chuanggu start up zone, taizihu cultural Digital Industrial Park, No.18 Shenlong Avenue, Wuhan Economic and Technological Development Zone, Hubei Province

Patentee before: HUBEI ECARX TECHNOLOGY Co.,Ltd.