CN117373082A - Face recognition method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117373082A
CN117373082A (application CN202311317570.9A)
Authority
CN
China
Prior art keywords
face
current
face image
preset
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311317570.9A
Other languages
Chinese (zh)
Inventor
陶训强
陈子予
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinli Intelligent Technology Jiangsu Co ltd
Original Assignee
Xinli Intelligent Technology Jiangsu Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinli Intelligent Technology Jiangsu Co., Ltd.
Priority to CN202311317570.9A
Publication of CN117373082A
Legal status: Pending

Classifications

    • G06V40/172: Recognition of human faces in image or video data; classification, e.g. identification
    • G06V40/50: Maintenance of biometric data or enrolment thereof
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The embodiment of the invention discloses a face recognition method, device, equipment, and storage medium. The method includes: acquiring a detection result of performing face recognition on a current face image through a preset face database; when the detection result indicates that the current face image matches first target face data in the preset face database, determining the last face image successfully matched with the first target face data; acquiring the current matching score and the last matching score obtained when the current face image and the last face image, respectively, were matched against the first target face data; and when the current matching score and the last matching score satisfy a preset difference condition, updating the first target face data in the preset face database according to the current face image. With this technical scheme, the first target face data is updated according to the current face image when the matching scores satisfy the preset difference condition, so that the face data in the preset face database can be dynamically updated and the accuracy of face recognition improved.

Description

Face recognition method, device, equipment and storage medium
Technical Field
The present invention relates to the field of face recognition technologies, and in particular, to a face recognition method, apparatus, device, and storage medium.
Background
With the continuous progress of society and the urgent demand for rapid and effective automatic identity authentication, biometric identification technology has developed rapidly. Its security and convenience far exceed those of traditional approaches such as passwords, passcodes, or ID cards. At present, human characteristics usable for identity recognition include physiological features such as fingerprints, irises, faces, palm prints, and veins, as well as behavioral features such as gait, handwriting, and voice. Compared with other recognition methods, face recognition is contact-free and unobtrusive, runs on widely available hardware, and is highly convenient, so it has been deeply researched and widely applied.
In the prior art, face recognition is generally performed on the current face image according to the face data in a preset face database.
However, slow changes in illumination, pose, beard, or age can cause the current face image to fail to match the face data in the preset face database, so that face recognition of the user fails.
Disclosure of Invention
The invention provides a face recognition method, a device, equipment and a storage medium, which can realize dynamic updating of face data in a preset face database and improve the accuracy of face recognition.
According to an aspect of the present invention, there is provided a face recognition method, the method comprising:
acquiring a detection result of performing face recognition on a current face image through a preset face database;
when the detection result indicates that the current face image matches first target face data in the preset face database, determining the last face image successfully matched with the first target face data;
acquiring a current matching score and a last matching score obtained when the current face image and the last face image, respectively, are matched against the first target face data;
and when the current matching score and the last matching score satisfy a preset difference condition, updating the first target face data in the preset face database according to the current face image.
Optionally, updating the first target face data in the preset face database according to the current face image includes: acquiring current face characteristics corresponding to a current face image; and updating the first target face data in the preset face database according to the preset updating proportion, the current face characteristics and the first target face data.
Optionally, the method further comprises: when the detection result indicates that the current face image fails to match any face data in the preset face database, detecting whether a shielding object exists in the current face image; when a shielding object exists in the current face image, performing a shielding object removal prompt; and acquiring the face image with the shielding object removed, and performing face recognition again according to that face image and the preset face database.
Optionally, after detecting whether a shielding object exists in the current face image, the method further includes: when a shielding object exists in the current face image, performing attribute recognition on the shielding object and determining the attribute category of the shielding object. Performing the shielding object removal prompt includes: performing the shielding object removal prompt according to the attribute category.
Optionally, the method further comprises:
when the detection result indicates that the current face image fails to match any face data in the preset face database, detecting whether a shielding object exists in the current face image; when a shielding object exists in the current face image, determining the position information of the shielding object in the current face image; acquiring a mapping relation between face positions and face data, and masking the face data in the preset face database according to the position information and the mapping relation; and performing face recognition again on the current face image according to the masked result.
Optionally, after performing face recognition again, the method further includes: if, when face recognition is performed again, the current face image matches second target face data in the preset face database, updating the second target face data according to the current face image.
Optionally, updating the second target face data according to the current face image includes: acquiring the current face features corresponding to the current face image; and updating the second target face data in the preset face database according to the preset updating proportion, the current face features, and the second target face data; or generating a face database with shielding objects according to the current face features, so as to perform face recognition on face images containing shielding objects according to that database.
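The masking variant recited above could look roughly like this; `region_dims`, a hypothetical mapping from face regions to feature-vector dimensions, stands in for the patent's position-to-face-data mapping relation, and the function name is an illustrative assumption:

```python
import numpy as np

def masked_score(query_feat, stored_feat, region_dims, blocked_regions):
    """Exclude the feature dimensions mapped to blocked face regions,
    then score the remaining dimensions with cosine similarity in [0, 1]."""
    keep = np.ones(len(stored_feat), dtype=bool)
    for region in blocked_regions:
        keep[region_dims[region]] = False  # drop dims behind the occluder
    q = np.asarray(query_feat, float)[keep]
    s = np.asarray(stored_feat, float)[keep]
    cos = float(np.dot(q, s) / (np.linalg.norm(q) * np.linalg.norm(s)))
    return (cos + 1.0) / 2.0
```

With this sketch, a query whose only disagreement with the stored feature lies in the masked dimensions scores highly again, which matches the intent of re-running recognition on the masked result.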
According to another aspect of the present invention, there is provided a face recognition apparatus, comprising:
the detection result acquisition module is used for acquiring a detection result of face recognition on the current face image through a preset face database;
the last face image determining module is used for determining the last face image successfully matched with the first target face data when the detection result indicates that the current face image matches the first target face data in the preset face database;
the matching score acquisition module is used for acquiring the current matching score and the last matching score when the last face image and the current face image are respectively matched with the first target face data;
And the first target face data updating module is used for updating the first target face data in the preset face database according to the current face image when the current matching score and the last matching score meet the preset difference condition.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the face recognition method of any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to perform the face recognition method of any one of the embodiments of the present invention.
According to the technical scheme, a detection result of performing face recognition on the current face image through the preset face database is acquired; when the detection result indicates that the current face image matches the first target face data in the preset face database, the last face image successfully matched with the first target face data is determined; the current matching score and the last matching score obtained when the current face image and the last face image, respectively, were matched against the first target face data are acquired; and when the two scores satisfy the preset difference condition, the first target face data in the preset face database is updated according to the current face image. This solves the problem that slow changes in illumination, pose, beard, or age cause the current face image to fail to match the first target face data in the preset face database, so that the face data in the preset face database can be dynamically updated and the accuracy of face recognition improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a face recognition method according to a first embodiment of the present invention;
fig. 2 is a flowchart of a face registration method according to an embodiment of the present invention;
fig. 3 is a flowchart of a preferred face recognition method according to an embodiment of the present invention;
fig. 4 is a flowchart of another face recognition method according to the second embodiment of the present invention;
fig. 5 is a flowchart of another face recognition method according to the third embodiment of the present invention;
fig. 6 is a flowchart of a method for updating target face data according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a face recognition device according to a fourth embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device implementing a face recognition method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a face recognition method according to a first embodiment of the present invention, where the method may be applied to recognizing a face according to a preset face database, and the method may be performed by a face recognition device, which may be implemented in hardware and/or software, and the face recognition device may be configured in an electronic device, such as a computer. As shown in fig. 1, the method includes:
step 110, obtaining a detection result of face recognition on the current face image through a preset face database.
In this embodiment, the preset face database may be used to store face data. The detection result may be that the current face image matches face data in the preset face database or that the current face image does not match face data in the preset face database.
In this step, optionally, the detection result of face recognition may be determined according to the current matching score of the current face image and each face data in the preset face database and the preset score threshold.
In a specific embodiment, feature extraction may be performed on the current face image by a face feature extraction algorithm to obtain the current face features. The detection result can then be determined according to the current matching score between the current face features and each set of face features in the preset face database and a preset score threshold. The preset score threshold may be any value between 0 and 1 (e.g., 0.6) determined experimentally.
For example, the Euclidean distance or cosine similarity between the current face features and the face features in the preset face database may first be computed. The smaller the Euclidean distance, the higher the similarity of the face features; the larger the cosine similarity, the smaller the included angle between the feature vectors and the higher their similarity. The result may then be normalized (for example, mapping the cosine similarity into [0, 1]) and used as the current matching score. Finally, it is determined whether the current matching score is greater than the preset score threshold. If so, the detection result is that the current face image matches face data in the preset face database; if not, the detection result is that the current face image does not match the face data in the preset face database.
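The score-and-threshold logic above can be sketched in a few lines. This is an illustrative Python sketch, not the patent's implementation: the cosine-to-[0, 1] mapping and the 0.6 threshold follow the example in the text, while the function and variable names are assumptions:

```python
import numpy as np

def match_score(query_feat, db_feat):
    """Cosine similarity between two feature vectors, mapped from [-1, 1]
    into [0, 1] so it can serve as a matching score."""
    cos = float(np.dot(query_feat, db_feat) /
                (np.linalg.norm(query_feat) * np.linalg.norm(db_feat)))
    return (cos + 1.0) / 2.0

def detect(query_feat, database, threshold=0.6):
    """Score the query against every database entry; report a match only
    when the best score exceeds the preset score threshold."""
    best_id, best_score = None, 0.0
    for face_id, feat in database.items():
        score = match_score(query_feat, feat)
        if score > best_score:
            best_id, best_score = face_id, score
    return (best_id, best_score) if best_score > threshold else (None, best_score)
```

Returning the score alongside the identity matters later: the score of each successful match is recorded so that it can serve as the "last matching score" the next time the same user is recognized.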
Optionally, face registration may be performed before face recognition, so that face data for matching the current face image exists in a preset face database.
Fig. 2 is a flowchart of a face registration method according to an embodiment of the present invention.
As shown in fig. 2, a camera may be used to collect a current face image, and perform face detection on the current face image to obtain a face region, and then, face key point detection may be performed on the detected face region. Then, face alignment can be performed, and the current face features are extracted after face alignment. Finally, the extracted face features can be stored in a preset face database.
Specifically, in step 1, the current face image may be acquired through a camera, and scaling and normalization are performed on it. In step 2, the face region, i.e., the position and size of the face, is located in the processed current face image by a face detection algorithm; the face detection algorithm may be a multi-task convolutional neural network (MTCNN). In step 3, a deep learning method is used to determine the position information of key areas of the face (such as the eyebrows, eyes, nose, mouth, and facial contour) within the face region; the deep learning method may be a deep convolutional neural network (DCNN).
In step 4, different face images of the same user may exhibit differences in pose, expression, and so on. To facilitate subsequent face feature extraction, a face alignment step transforms each face image to a uniform angle or pose. For example, a similarity transformation (rotation, scaling, and translation) may be applied to the current face image according to the face key point detection result until it meets the requirements of a standard face template, yielding the aligned current face image. In step 5, a deep learning method extracts a compact and discriminative face feature vector from the aligned current face image; the length of the face feature vector may be fixed, e.g., 128 dimensions.
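As an illustration of the alignment in step 4, a similarity transform can be estimated from as few as two key points (the eye centres). This is a sketch under assumptions: the template eye coordinates and function name below are illustrative, not values from the patent:

```python
import numpy as np

def align_by_eyes(left_eye, right_eye,
                  target_left=(38.0, 52.0), target_right=(90.0, 52.0)):
    """Estimate a 2x3 similarity transform (rotation, uniform scale and
    translation) mapping the detected eye centres onto fixed template
    positions; applying it to the image crop aligns the face."""
    src = np.asarray(right_eye, float) - np.asarray(left_eye, float)
    dst = np.asarray(target_right, float) - np.asarray(target_left, float)
    scale = np.linalg.norm(dst) / np.linalg.norm(src)
    angle = np.arctan2(dst[1], dst[0]) - np.arctan2(src[1], src[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    rot = np.array([[c, -s], [s, c]])                  # rotation * scale
    t = np.asarray(target_left, float) - rot @ np.asarray(left_eye, float)
    return np.hstack([rot, t[:, None]])                # 2x3 affine matrix
```

In practice all detected key points would be used in a least-squares fit; the two-point version is the minimal case that still fixes rotation, scale, and translation.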
Step 120, when the detection result indicates that the current face image matches the first target face data in the preset face database, determining the last face image successfully matched with the first target face data.
In this embodiment, the first target face data may be face data that can be successfully matched with the current face image, and the first target face data and the current face image are from the same user.
In this step, optionally, the first target face data may be obtained from the preset face database, and the last face image may then be determined according to the first target face data, the successful-match identifier, and the matching time.
Step 130, acquiring the current matching score and the last matching score obtained when the current face image and the last face image, respectively, are matched against the first target face data.
In this embodiment, the current matching score may be used to represent the matching degree of the current face image and the first target face data. The last matching score may be used to represent a degree of matching of the last face image with the first target face data. Alternatively, the higher the matching score, the higher the degree of matching.
In this step, optionally, the last matching score may be obtained according to a history score record stored in a preset face database.
Step 140, updating the first target face data in the preset face database according to the current face image when the current matching score and the last matching score satisfy the preset difference condition.
In this embodiment, the preset difference condition may be set according to user requirements, for example, that the current matching score is smaller than a preset multiple of the last matching score.
In this step, specifically, the first target face data may be updated according to the face features of the current face image. Optionally, since face features matched with high confidence do not degrade the accuracy of face recognition, the technical scheme of this embodiment does not update the first target face data for such high-confidence matches.
For example, assume the current matching score is 0.8, the last matching score is 1, and the preset difference condition is that the current matching score is less than 0.9 times the last matching score. The two scores then satisfy the preset difference condition, and the accuracy of recognizing this user with the first target face data is considered to have degraded.
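The difference condition in this example amounts to a one-line comparison; a minimal sketch (the 0.9 ratio is the example's value and would be configurable, and the function name is an assumption):

```python
def needs_update(current_score, last_score, ratio=0.9):
    """Preset difference condition from the example: the current score has
    dropped below `ratio` times the last successful matching score."""
    return current_score < ratio * last_score
```

With the example's numbers, `needs_update(0.8, 1.0)` holds, triggering the update of the first target face data.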
The advantage of this arrangement is that slow changes in illumination, pose, beard, or age may degrade the accuracy of face recognition; by updating the first target face data according to the current face image, the technical scheme of this embodiment improves the accuracy of face recognition without reducing the overall recognition accuracy.
According to the technical scheme of this embodiment, a detection result of performing face recognition on the current face image through the preset face database is acquired; when the detection result indicates that the current face image matches the first target face data in the preset face database, the last face image successfully matched with the first target face data is determined; and the current matching score and the last matching score obtained when the current face image and the last face image, respectively, were matched against the first target face data are acquired. This solves the problem that slow changes in illumination, pose, beard, or age cause the current face image to fail to match the first target face data in the preset face database. When the current matching score and the last matching score satisfy the preset difference condition, the first target face data is updated according to the current face image, so that the face data in the preset face database can be dynamically updated and the accuracy of face recognition improved.
Fig. 3 is a flowchart of a preferred face recognition method according to an embodiment of the present invention.
In a preferred implementation of the embodiment of the present invention, as shown in fig. 3, a camera may be used to collect the current face image, face detection is performed on it to obtain the face region, and face key point detection is then performed on the detected region. Face alignment follows, after which the current face features are extracted. Finally, the current face image is matched against each set of face data in the preset face database to obtain the current matching score, and it is judged whether this score is greater than the preset score threshold. If so, a face recognition success prompt is given and the current matching score is recorded; if not, a face recognition failure prompt is given.
Example two
Fig. 4 is a flowchart of another face recognition method according to the second embodiment of the present invention, where the technical solution is further refined, and the technical solution in this embodiment may be combined with each of the alternatives in the one or more embodiments. As shown in fig. 4, the method includes:
step 210, obtaining a detection result of face recognition on the current face image through a preset face database.
Step 220, when the detection result is that the current face image is matched with the first target face data in the preset face database, determining the last face image successfully matched with the first target face data.
Step 230, obtaining a current matching score and a last matching score when the last face image and the current face image are respectively matched with the first target face data.
Step 240, when the current matching score and the last matching score satisfy the preset difference condition, acquiring the current face features corresponding to the current face image.
In this embodiment, optionally, a face image feature extraction method may be used to extract the current face features from the current face image. The feature extraction method may include the histogram of oriented gradients (HOG) or a convolutional neural network.
Step 250, updating the first target face data in the preset face database according to the preset updating proportion, the current face characteristics and the first target face data.
In this step, specifically, the following calculation formula may be used to update the first target face data:

f_new = f × a + f_T

where f_new is the face feature corresponding to the updated first target face data, f is the current face feature, a is the preset updating proportion, and f_T is the face feature corresponding to the first target face data.
The advantage of this arrangement is that, compared with directly adding the current face features to the preset face database as in the prior art, updating the first target face data with the current face features at the preset updating proportion reduces storage occupation in the preset face database and improves face recognition speed. Moreover, for changeable face features (such as beards), setting a small preset updating proportion lets the features be merged into the first target face data slowly, so the updated data can match both the current face image and historical face images, improving the accuracy of face recognition.
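A minimal sketch of this update step, implementing the formula above and then L2-normalising the result (the re-normalisation is an added assumption, not stated in the text, so that updated features stay comparable under cosine scoring):

```python
import numpy as np

def update_target_feature(current_feat, stored_feat, a=0.1):
    """f_new = f * a + f_T, followed by L2 re-normalisation; a small `a`
    merges the current feature into the stored one only slowly."""
    f_new = a * np.asarray(current_feat, float) + np.asarray(stored_feat, float)
    return f_new / np.linalg.norm(f_new)
```

Because only the single stored vector is rewritten, the database size stays constant no matter how many recognitions trigger an update, which is the storage advantage described above.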
Step 260, when the detection result indicates that the current face image fails to match any face data in the preset face database, detecting whether a shielding object exists in the current face image.
In this step, optionally, the face detection algorithm may output, together with the key point detection result, information on whether each key point is blocked. Alternatively, the key points of the face region of the current face image may be detected by a face detection algorithm, and whether a shielding object exists may then be judged according to the key points and a preset shielding threshold. Specifically, if the proportion of blocked key points in a key area of the face (such as the eyebrows, eyes, nose, mouth, or facial contour) reaches the preset shielding threshold, that key area is considered blocked, that is, a shielding object exists in the current face image.
For example, assume the key area of the face is the mouth and the preset shielding threshold is 50%. If 60% of the key points on the mouth are blocked, the mouth is considered blocked.
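The per-region threshold check above can be sketched as follows; the region names and function name are illustrative assumptions:

```python
def find_blocked_regions(region_flags, threshold=0.5):
    """Given per-region lists of booleans (True = key point blocked),
    return the regions whose blocked fraction reaches the preset
    shielding threshold."""
    return [name for name, flags in region_flags.items()
            if sum(flags) / len(flags) >= threshold]
```

A non-empty result means a shielding object exists in the current face image, and the names of the blocked regions can feed the attribute-based removal prompt described next.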
In an optional implementation manner of the embodiment of the present invention, after detecting whether an occlusion exists in the current face image, the method further includes: when a shielding object exists in the current face image, carrying out attribute identification on the shielding object, and determining the attribute category of the shielding object.
In this embodiment, the attribute categories include, but are not limited to, mask, sunglasses, hair, and the like.
Step 270, when a shielding object exists in the current face image, a shielding object removing prompt is performed.
Performing the shielding object removal prompt includes: performing the shielding object removal prompt according to the attribute category.
For example, if the attribute category is a mask, the mouth of the current face image can be considered blocked, and the user can be prompted to remove the mouth covering. If the attribute category is sunglasses, the eyes of the current face image can be considered blocked, and the user can be prompted to remove the eye covering.
Step 280, acquiring the face image with the shielding object removed, and performing face recognition again according to that face image and the preset face database.
Specifically, the face image from which the shielding object has been removed may be matched against each set of face data in the preset face database.
Step 290, when face recognition is performed again, if the current face image matches the second target face data in the preset face database, updating the second target face data according to the current face image.
In this embodiment, the second target face data may be face data successfully matched with the current face image when face recognition is performed again.
In this step, specifically, first, the current face feature corresponding to the current face image is obtained. And then, updating second target face data in a preset face database according to the current face characteristics.
According to the technical scheme of this embodiment, when the current matching score and the last matching score satisfy the preset difference condition, the current face features corresponding to the current face image are acquired, and the first target face data in the preset face database is updated according to the preset updating proportion, the current face features, and the first target face data. When the detection result indicates that the current face image fails to match any face data in the preset face database, whether a shielding object exists in the current face image is detected; when one exists, a shielding object removal prompt is issued; the face image with the shielding object removed is acquired, and face recognition is performed again according to that image and the preset face database. This solves the problem that changes in illumination, pose, beard, masks, sunglasses, or age cause the current face image to fail to match the first target face data in the preset face database. Updating the first target face data according to the preset updating proportion lets face features be merged slowly instead of storing the current face features directly in the preset face database, reducing storage occupation and improving face recognition efficiency. In addition, detecting whether a shielding object exists and prompting the user to remove it avoids recognition failures caused by shielding objects, improving the accuracy of face recognition.
Example III
Fig. 5 is a flowchart of another face recognition method according to the third embodiment of the present invention, which further refines the technical solutions above; the technical solution in this embodiment may be combined with the alternatives in one or more of the foregoing embodiments. As shown in fig. 5, the method includes:
step 310, obtaining a detection result of face recognition on the current face image through a preset face database.
Step 320, when the detection result is that the current face image is matched with the first target face data in the preset face database, determining the last face image successfully matched with the first target face data.
Step 330, obtaining the current matching score and the previous matching score when the previous face image and the current face image are respectively matched with the first target face data; and when the current matching score and the last matching score meet the preset difference condition, updating first target face data in a preset face database according to the current face image.
In an optional implementation manner of the embodiment of the present invention, updating the first target face data in the preset face database according to the current face image includes: acquiring current face characteristics corresponding to a current face image; and updating the first target face data in the preset face database according to the preset updating proportion, the current face characteristics and the first target face data.
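As an illustrative sketch of the weighted update described above (the update proportion value, the use of cosine-normalized feature vectors, and all names are assumptions, not taken from this disclosure), the first target face data can be blended with the current face feature as follows:

```python
import numpy as np

def update_face_feature(stored: np.ndarray, current: np.ndarray,
                        update_ratio: float = 0.1) -> np.ndarray:
    """Blend the current face feature into the stored template.

    With a small update_ratio the template drifts slowly toward the
    current appearance (lighting, beard, age) instead of the current
    feature being stored as a separate database entry.
    """
    updated = (1.0 - update_ratio) * stored + update_ratio * current
    # Re-normalize so cosine-similarity matching keeps working.
    return updated / np.linalg.norm(updated)

template = np.array([1.0, 0.0])
observed = np.array([0.0, 1.0])
new_template = update_face_feature(template, observed)
```

Because only the blended template is kept, the storage footprint of the database does not grow with each successful recognition.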
Step 340, when the detection result is that the current face image fails to match all face data in the preset face database, detecting whether a shielding object exists in the current face image.
In an alternative implementation of the embodiment of the present invention, the method further includes: when the detection result is that the current face image is not matched with all face data in the preset face database, detecting whether a shielding object exists in the current face image; when a shielding object exists in the current face image, carrying out shielding object removal prompt; and acquiring the face image with the occlusion object removed, and carrying out face recognition again according to the face image with the occlusion object removed and a preset face database.
Optionally, after detecting whether a shielding object exists in the current face image, the method further includes: when a shielding object exists in the current face image, performing attribute identification on the shielding object and determining the attribute category of the shielding object; performing the shielding object removal prompt then includes: performing the shielding object removal prompt according to the attribute category.
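A minimal sketch of prompting by attribute category might look as follows (the category names and prompt wording are illustrative assumptions, not specified by this disclosure):

```python
# Map shielding-object attribute categories to user-facing removal prompts.
REMOVAL_PROMPTS = {
    "mask": "Please remove your mask and face the camera.",
    "sunglasses": "Please take off your sunglasses.",
    "hat": "Please take off your hat.",
}

def removal_prompt(attribute_category: str) -> str:
    # Fall back to a generic prompt for unrecognized categories.
    return REMOVAL_PROMPTS.get(
        attribute_category,
        "Please remove the object covering your face.")
```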
Step 350, when a shielding object exists in the current face image, determining the position information of the shielding object in the current face image.
In this step, optionally, a face alignment method such as the Supervised Descent Method (SDM) may be employed to determine the edge identification points of the shielding object. The position information of the shielding object in the current face image can then be determined from these edge identification points.
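For instance, the shielding object's position information can be summarized as the bounding box of its edge identification points (the bounding-box representation is an assumption; SDM itself is not implemented here):

```python
def occlusion_bbox(edge_points):
    """Return (x_min, y_min, x_max, y_max) for a set of (x, y) edge
    identification points, e.g. landmarks produced by a face alignment
    method such as SDM."""
    xs = [x for x, _ in edge_points]
    ys = [y for _, y in edge_points]
    return min(xs), min(ys), max(xs), max(ys)
```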
Step 360, obtaining a mapping relation between the face position and the face data, and carrying out mask processing on the face data in the preset face database according to the position information and the mapping relation.
In this step, optionally, a face recognition model may be trained using historical face images, and the mapping relation between face positions and face data may be obtained from the trained model. The historical face images may include normal face images and abnormal face images, where an abnormal face image may be one with misaligned facial features. The key area of the face shielded by the shielding object can then be obtained according to the position information and the mapping relation. Finally, mask processing can be performed on the face data in the preset face database according to the position of the shielding object in the current face image.
By way of example, if the shielding object is located in the middle of the current face image, the nose can be considered to be shielded, and mask processing can be performed on the nose region of the face data. The face data then has the same shielding effect as the current face image, which prevents the shielding object from influencing the face recognition result and improves the accuracy of face recognition.
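One plausible form of the mask processing is to zero out the same region in the stored face data so that the template and the probe image are occluded identically (the exact mask operation is not fixed by this disclosure; zeroing pixels is an assumption):

```python
import numpy as np

def mask_region(face: np.ndarray, bbox) -> np.ndarray:
    """Zero out the occluded region of a stored face image so it has
    the same shielding effect as the current (occluded) probe image."""
    x_min, y_min, x_max, y_max = bbox
    masked = face.copy()
    masked[y_min:y_max, x_min:x_max] = 0  # rows are y, columns are x
    return masked

gallery_face = np.ones((4, 4))
masked_face = mask_region(gallery_face, (1, 1, 3, 3))
```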
Step 370, performing face recognition on the current face image again according to the mask result.
Step 380, when face recognition is performed again, if the current face image matches the second target face data in the preset face database, acquiring the current face feature corresponding to the current face image.
Step 390, updating the second target face data in the preset face database according to the preset update proportion, the current face feature and the second target face data; or generating a face database with shielding objects according to the current face feature, so as to perform face recognition on face images with shielding objects according to that database.
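The shielding-object-specific database mentioned in step 390 can be sketched as a simple per-user store of features captured while a shielding object was present (the structure and all names are assumptions for illustration):

```python
from collections import defaultdict

# user_id -> list of face features captured with a shielding object present
occluder_face_db = defaultdict(list)

def add_occluder_template(user_id: str, feature) -> None:
    """Record a feature so later occluded captures of the same user
    can be matched directly against this database."""
    occluder_face_db[user_id].append(feature)

add_occluder_template("user_1", [0.1, 0.9])
add_occluder_template("user_1", [0.2, 0.8])
```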
According to the technical scheme of this embodiment, when a shielding object exists in the current face image, the position information of the shielding object in the current face image is determined; the mapping relation between face positions and face data is acquired, and mask processing is performed on the face data in the preset face database according to the position information and the mapping relation; face recognition is then performed again on the current face image according to the mask result; if, on this second recognition, the current face image matches the second target face data in the preset face database, the current face feature corresponding to the current face image is acquired, and the second target face data in the preset face database is updated according to the preset update proportion, the current face feature and the second target face data; alternatively, a face database with shielding objects is generated according to the current face feature, so that face images with shielding objects can be recognized against that database. These technical means solve the problem that the current face image fails to match the first target face data in the preset face database because of changes in illumination, pose, beard, wearing of a mask or sunglasses, or age. By performing mask processing on the face data in the preset face database, poor recognition accuracy caused by a shielding object in the current face image is avoided, and the accuracy of face recognition is improved. In addition, by generating the face database with shielding objects, a face image in which a shielding object is detected can be recognized directly against that database, which improves the recognition efficiency for such images.
In a preferred embodiment, taking an automobile as an example application scenario of the face recognition method, fig. 6 is a flowchart of a target face data updating method provided in an embodiment of the present invention.
As shown in fig. 6, after the current face feature is extracted, the current face image is matched with each piece of face data in the preset face database to obtain the current matching score. Whether the current matching score is larger than a preset score threshold is then judged. If yes, a face recognition success prompt is given and the current matching score is recorded; if the current matching score is smaller than a preset proportion of the last matching score, the first target face data is updated. If not, a face recognition failure prompt is given and shielding object recognition is performed. A shielding object removal prompt is issued according to the attribute category, and the face image with the shielding object removed is acquired. The current face feature of that image is then obtained, and face recognition is performed again according to it. If the face recognition succeeds, the second target face data is updated according to the current face feature.
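The decision flow of fig. 6 can be sketched as follows (the score threshold and the preset proportion are illustrative values, not taken from this disclosure):

```python
def recognition_decision(current_score: float, last_score: float,
                         score_threshold: float = 0.8,
                         preset_proportion: float = 0.95):
    """Return (result, update_template) following the fig. 6 flow:
    succeed when the score clears the threshold, and update the first
    target face data when the score has dropped below a preset
    proportion of the last successful match's score."""
    if current_score > score_threshold:
        update = current_score < preset_proportion * last_score
        return "success", update
    # Failure triggers shielding-object recognition and a removal prompt.
    return "failure", False
```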
In this way, after the user passes identity verification through face recognition, personalized settings can be made according to the user's usage records, such as automatically adjusting the seat angle and starting the infotainment function, thereby improving the user experience.
In the technical scheme of the embodiments of the present invention, the collection, storage and application of the user personal information involved (such as face information and voice information) comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
Example IV
Fig. 7 is a schematic structural diagram of a face recognition device according to a fourth embodiment of the present invention.
As shown in fig. 7, the apparatus includes:
a detection result obtaining module 71, configured to obtain a detection result of performing face recognition on a current face image through a preset face database;
the previous face image determining module 72 is configured to determine a previous face image successfully matched with the first target face data in the preset face database when the detection result is that the current face image is matched with the first target face data in the preset face database;
a matching score obtaining module 73, configured to obtain a current matching score and a previous matching score when the previous face image and the current face image are respectively matched with the first target face data;
the first target face data updating module 74 is configured to update the first target face data in the preset face database according to the current face image when the current matching score and the last matching score satisfy the preset difference condition.
According to the technical scheme of this embodiment, through the cooperation of the detection result acquisition module, the last face image determination module, the matching score acquisition module and the first target face data updating module, the problem that the current face image fails to match the first target face data in the preset face database due to gradual changes in illumination, pose, beard or age is solved; the face data in the preset face database can be dynamically updated, and the accuracy of face recognition is improved.
Optionally, the first target face data updating module 74 includes:
the current face feature acquisition unit is used for acquiring current face features corresponding to the current face image;
the first target face data updating unit is used for updating the first target face data in the preset face database according to the preset updating proportion, the current face characteristics and the first target face data.
Optionally, the apparatus further includes a face re-recognition module, the module including:
the occlusion object detection unit is used for detecting whether an occlusion object exists in the current face image or not when the detection result is that the current face image is not matched with all face data in the preset face database;
the attribute category determining unit is used for performing attribute identification on the occlusion object when an occlusion object exists in the current face image, and determining the attribute category of the occlusion object;
the occlusion removal prompting unit is used for performing an occlusion removal prompt when an occlusion object exists in the current face image, wherein performing the occlusion removal prompt includes: performing the occlusion removal prompt according to the attribute category;
the face re-recognition unit is used for acquiring the face image with the occlusion object removed and re-recognizing the face according to the face image with the occlusion object removed and a preset face database;
the device comprises a shielding object position determining unit, a shielding object position determining unit and a shielding object position determining unit, wherein the shielding object position determining unit is used for determining the position information of the shielding object in the current face image when the shielding object exists in the current face image;
the mask processing unit is used for acquiring the mapping relation between the face position and the face data and carrying out mask processing on the face data in the preset face database according to the position information and the mapping relation;
the face recognition unit is used for carrying out face recognition on the current face image again according to the mask result;
the second target face data updating unit is used for, when face recognition is performed again, updating the second target face data according to the current face image if the current face image matches the second target face data in the preset face database;
The current face feature acquisition unit is used for acquiring current face features corresponding to the current face image;
the occluded-face recognition unit is used for updating the second target face data in the preset face database according to the preset update proportion, the current face feature and the second target face data; or generating a face database with shielding objects according to the current face feature, so as to perform face recognition on face images with shielding objects according to that database.
The face recognition device provided by the embodiment of the invention can execute the face recognition method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example five
Fig. 8 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 8, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as face recognition methods.
In some embodiments, the face recognition method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the face recognition method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the face recognition method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described herein may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility of traditional physical host and VPS (Virtual Private Server) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A face recognition method, comprising:
acquiring a detection result of face recognition on a current face image through a preset face database;
when the detection result is that the current face image matches first target face data in the preset face database, determining a last face image successfully matched with the first target face data;
acquiring a current matching score and a previous matching score when the previous face image and the current face image are respectively matched with the first target face data;
and when the current matching score and the last matching score meet a preset difference condition, updating the first target face data in the preset face database according to the current face image.
2. The method of claim 1, wherein updating the first target face data in the preset face database based on the current face image comprises:
acquiring current face features corresponding to the current face image;
and updating the first target face data in the preset face database according to a preset updating proportion, the current face characteristics and the first target face data.
3. The method as recited in claim 1, further comprising:
when the detection result is that the current face image is not matched with all face data in the preset face database, detecting whether a shielding object exists in the current face image;
when a shielding object exists in the current face image, carrying out shielding object removal prompt;
and acquiring a face image of which the shielding object is removed, and carrying out face recognition again according to the face image of which the shielding object is removed and the preset face database.
4. The method according to claim 3, further comprising, after detecting whether a shielding object exists in the current face image:
when a shielding object exists in the current face image, carrying out attribute identification on the shielding object, and determining the attribute category of the shielding object;
wherein performing the shielding object removal prompt comprises:
performing the shielding object removal prompt according to the attribute category.
5. The method as recited in claim 1, further comprising:
detecting whether a shielding object exists in the current face image or not when the detection result is that the current face image is not matched with all face data in the preset face database;
when a shielding object exists in the current face image, determining the position information of the shielding object in the current face image;
acquiring a mapping relation between a face position and face data, and carrying out mask processing on the face data in the preset face database according to the position information and the mapping relation;
and carrying out face recognition again on the current face image according to the mask result.
6. The method according to any one of claims 3-5, further comprising, after the face recognition is performed again:
when the face recognition is performed again, if the current face image matches second target face data in the preset face database, updating the second target face data according to the current face image.
7. The method of claim 6, wherein updating the second target face data based on the current face image comprises:
acquiring current face features corresponding to the current face image;
updating the second target face data in the preset face database according to a preset updating proportion, the current face characteristics and the second target face data; or,
and generating a face database with shielding objects according to the current face feature, so as to perform face recognition on a face image with a shielding object according to the face database with shielding objects.
8. A face recognition device, comprising:
the detection result acquisition module is used for acquiring a detection result of face recognition on the current face image through a preset face database;
the last face image determining module is used for determining a last face image successfully matched with the first target face data when the detection result is that the current face image is matched with the first target face data in the preset face database;
The matching score acquisition module is used for acquiring a current matching score and a last matching score when the last face image and the current face image are respectively matched with the first target face data;
and the first target face data updating module is used for updating the first target face data in the preset face database according to the current face image when the current matching score and the last matching score meet a preset difference condition.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the face recognition method of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a processor to perform the face recognition method of any one of claims 1-7.
CN202311317570.9A 2023-10-11 2023-10-11 Face recognition method, device, equipment and storage medium Pending CN117373082A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311317570.9A CN117373082A (en) 2023-10-11 2023-10-11 Face recognition method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117373082A true CN117373082A (en) 2024-01-09

Family

ID=89388533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311317570.9A Pending CN117373082A (en) 2023-10-11 2023-10-11 Face recognition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117373082A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination