CN111597910A - Face recognition method, face recognition device, terminal equipment and medium - Google Patents

Face recognition method, face recognition device, terminal equipment and medium

Info

Publication number
CN111597910A
CN111597910A
Authority
CN
China
Prior art keywords
face
detected
feature information
feature
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010321650.1A
Other languages
Chinese (zh)
Inventor
王维治
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Infineon Information Co ltd
Original Assignee
Shenzhen Infinova Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Infinova Intelligent Technology Co Ltd filed Critical Shenzhen Infinova Intelligent Technology Co Ltd
Priority to CN202010321650.1A priority Critical patent/CN111597910A/en
Publication of CN111597910A publication Critical patent/CN111597910A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Abstract

The application relates to the technical field of artificial intelligence and provides a face recognition method, a face recognition device, terminal equipment and a medium. The method includes: collecting a picture to be detected and extracting the face to be detected from it; if the face to be detected is not occluded, recognizing it against a preset first feature library, which stores first face feature information of unoccluded faces; if the face to be detected is occluded, recognizing it against a preset second feature library, which stores second face feature information of occluded faces. Both the first and the second face feature information are extracted from unoccluded face pictures. This method can improve the accuracy of recognizing occluded faces.

Description

Face recognition method, face recognition device, terminal equipment and medium
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to a face recognition method, a face recognition device, a terminal device and a medium.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information. It can be used to track a person's movements; for example, during an epidemic, the trajectory of a confirmed patient can be traced from face images captured and recognized by intelligent cameras.
However, in some cases face recognition may be inaccurate because the face is partially occluded. For example, during an epidemic, prevention and control measures generally require people to wear masks in public places, and the occlusion caused by wearing a mask interferes with face recognition.
Disclosure of Invention
The embodiments of the application provide a face recognition method, a face recognition device, terminal equipment and a medium, which can improve the accuracy of recognizing an occluded face.
In a first aspect, an embodiment of the present application provides a face recognition method, including:
collecting a picture to be detected, and extracting the face to be detected from the picture to be detected;
if the face to be detected is not occluded, recognizing the unoccluded face to be detected by using a preset first feature library, wherein the first feature library stores unoccluded first face feature information;
if the face to be detected is occluded, recognizing the occluded face to be detected by using a preset second feature library, wherein the second feature library stores occluded second face feature information, and the first face feature information and the second face feature information are both extracted from unoccluded face pictures.
In a second aspect, an embodiment of the present application provides a face recognition apparatus, including:
the to-be-detected face extraction module is used for collecting a picture to be detected and extracting the face to be detected from the picture to be detected;
the first recognition module is used for recognizing the unoccluded face to be detected by using a preset first feature library if the face to be detected is not occluded, wherein the first feature library stores unoccluded first face feature information;
and the second recognition module is used for recognizing the occluded face to be detected by using a preset second feature library if the face to be detected is occluded, wherein the second feature library stores occluded second face feature information, and the first face feature information and the second face feature information are both extracted from unoccluded face pictures.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the method according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a terminal device, causes the terminal device to execute the method described in the first aspect.
Compared with the prior art, the embodiments of the application have the following advantages. The terminal device may include a first feature library containing unoccluded first face feature information and a second feature library containing occluded second face feature information. During face recognition, a picture is first collected, the face to be detected is extracted from it, and it is judged whether the face is occluded. If the face to be detected is not occluded, the first feature library can be used to recognize it: because the stored feature information comes from unoccluded faces, the comparison covers more feature information and the result is more accurate. If the face to be detected is occluded, the second feature library can be used: because the stored feature information comes from occluded faces, comparing against it avoids matching on interference from the occluded region, so the occluded face can still be recognized.
In this embodiment, different feature libraries are used to recognize the face to be detected depending on whether it is occluded. On the one hand, when unoccluded faces are compared, more information is available for comparison; on the other hand, if an occluded face were compared directly against unoccluded features, it could fail to be recognized because the occluded region differs too much from the corresponding unoccluded region.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a face recognition method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a face recognition method according to a second embodiment of the present application;
fig. 3 is a schematic flow chart of a face recognition method according to a third embodiment of the present application;
fig. 4 is a schematic structural diagram of a face recognition apparatus according to a fourth embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal device according to a fifth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Fig. 1 is a schematic flow chart of a face recognition method provided in an embodiment of the present application, and as shown in fig. 1, the method includes:
s101, collecting a picture to be detected, and extracting a face to be detected in the picture to be detected;
the execution main body of this embodiment is a terminal device, and the terminal device may be a face recognition camera, a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, a super-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like.
The picture to be detected comprises a face to be detected, and the picture to be detected can be acquired in real time through camera equipment. Specifically, the image to be detected may be acquired by using a camera, and because the image to be detected may include some background information in addition to the face to be detected, the face to be detected may be extracted from the image to be detected by using a face extraction algorithm.
Illustratively, the camera can perform face detection, face filtering and face selection in real time, and output the best face picture captured between the moment a face enters the camera's field of view and the moment it leaves. When faces are collected, different filtering conditions can be configured for different scenes; the filtering conditions may include a frontal-angle threshold, a face size threshold, a motion speed threshold, a blur threshold, a face detection score threshold, and so on. Collecting higher-quality face pictures improves the accuracy of face recognition. Because a collected picture usually also contains background information, a preset face extraction algorithm can be used to extract the face to be detected from it. The face extraction algorithm can be obtained by deep learning training on sample pictures.
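The capture-filtering step described above can be sketched as a simple configuration check. All threshold values and field names below are illustrative assumptions, not values taken from the patent.

```python
# Illustrative sketch of the capture-filtering step: a detection passes
# only if it satisfies every configured quality threshold.
# All thresholds and field names are assumptions for demonstration.

def passes_filters(detection: dict, cfg: dict) -> bool:
    """Return True if a detected face meets the configured quality filters."""
    return (
        abs(detection["yaw_deg"]) <= cfg["max_frontal_angle_deg"]      # frontal-angle threshold
        and detection["face_size_px"] >= cfg["min_face_size_px"]       # face size threshold
        and detection["speed_px_per_s"] <= cfg["max_speed_px_per_s"]   # motion-speed threshold
        and detection["blur_score"] <= cfg["max_blur_score"]           # blur threshold
        and detection["det_score"] >= cfg["min_det_score"]             # detection-score threshold
    )

# Example per-scene configuration (hypothetical values).
config = {
    "max_frontal_angle_deg": 30,
    "min_face_size_px": 80,
    "max_speed_px_per_s": 200,
    "max_blur_score": 0.4,
    "min_det_score": 0.8,
}
```

In practice such a configuration would be tuned per scene, as the text notes; the check itself stays the same.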
S102, if the face to be detected is not occluded, recognizing the unoccluded face to be detected by using a preset first feature library, wherein the first feature library stores unoccluded first face feature information;
the first face feature information may include features of various parts of the occluded face, such as positions and shapes of eyes, mouth, nose, eyebrows, and the like, and may further include a facial contour, a skin state, and the like. The first feature library is composed of a plurality of first face feature information, and each first face feature information corresponds to one recognized face. Specifically, a plurality of unoccluded face pictures can be collected in advance and stored in a face library, then first face feature information of an unoccluded face is extracted by adopting a feature extraction algorithm, and then each first face feature information is stored in the first feature library.
Specifically, after the face to be detected is extracted, a preset attribute extraction algorithm can be used to judge whether it is occluded. If it is not occluded, its feature information is extracted and compared with each item of first face feature information in the first feature library; when the similarity between the feature information of the face to be detected and some first face feature information is higher than a threshold, the two can be identified as the same face.
The attribute extraction algorithm can be obtained by pre-training a deep learning model; given an input picture, the trained algorithm outputs whether the face in it is occluded or not. Whether the face to be detected is occluded can therefore be judged by this attribute extraction algorithm.
S103, if the face to be detected is occluded, recognizing the occluded face to be detected by using a preset second feature library, wherein the second feature library stores occluded second face feature information, and the first face feature information and the second face feature information are both extracted from unoccluded face pictures.
The second face feature information includes features of an occluded face, which may include, for example, the positions and shapes of the eyes and eyebrows, the facial contour, and whether there are moles or other marks on the face. The second feature library consists of multiple items of second face feature information, each corresponding to one recognized face. Specifically, a plurality of unoccluded face pictures can be collected in advance and the target region of each occluded; a feature extraction algorithm then extracts the second face feature information of the occluded faces, and each item is stored in the second feature library.
Specifically, after the face to be detected is extracted, a preset attribute extraction algorithm can be used to judge whether it is occluded. If it is occluded, its feature information is extracted and compared with each item of second face feature information in the second feature library; when the similarity between the feature information of the face to be detected and some second face feature information is higher than a threshold, the two can be identified as the same face.
It should be noted that, when the first face feature information and the second face feature information are extracted, the same unoccluded face picture may be used to obtain both, and the unoccluded face pictures may be stored in the face library. The terminal device stores a number of recognized face pictures; the corresponding unoccluded first face feature information is stored in the first feature library, and the corresponding occluded second face feature information in the second feature library.
In this embodiment, different feature libraries may be used to recognize the face to be detected depending on whether it is occluded. An unoccluded face to be detected is compared with the unoccluded first face feature information; since neither of the compared faces is occluded, more features can be compared. If part of the face to be detected is occluded and it were compared directly with unoccluded faces, the similarity would be low even when the corresponding recognized face exists, and the recognition error would be large. An occluded face to be detected is instead compared with the occluded second face feature information; since both sides of the comparison describe occluded faces, interference from the occluded region is avoided during feature comparison, improving the accuracy of occluded face recognition.
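The branching of steps S101 to S103 can be sketched as follows. The occlusion judge, feature extractor and matcher are stand-ins for the trained models the patent assumes; their names and signatures are assumptions for illustration.

```python
def recognize(face_img, first_library, second_library,
              is_occluded, extract_features, match):
    """Dispatch recognition to the feature library that matches the
    occlusion state of the face, as in steps S101-S103 (sketch)."""
    features = extract_features(face_img)
    if is_occluded(features):
        # Occluded face: compare only against occluded-face features,
        # avoiding interference from the occluded region.
        return match(features, second_library)
    # Unoccluded face: compare against the richer unoccluded features.
    return match(features, first_library)
```

The point of the design is visible in the signature: one query, two libraries, and the occlusion attribute choosing between them.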
Fig. 2 is a schematic flow chart of a face recognition method provided in the second embodiment of the present application, and as shown in fig. 2, the method includes:
s201, collecting a human face picture which is not shielded;
the execution main body of this embodiment is a terminal device, and the terminal device may be a face recognition camera, a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, a super-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like.
Specifically, the first feature library and the second feature library in the terminal device are obtained by extracting feature information from a number of pictures. Therefore, some unoccluded face pictures can be collected in advance and stored in the terminal device as recognized faces. Illustratively, face pictures uploaded by users can be received, or pictures can be obtained from an open database.
S202, extracting first face feature information from the unoccluded face pictures, and storing the first face feature information into the first feature library;
Specifically, a feature extraction algorithm is used to extract features from the face pictures collected in advance; because the faces are not occluded, first face feature information of unoccluded faces can be extracted. The first face feature information extracted from each face picture is stored in the first feature library.
Specifically, when feature extraction is performed, five key point positions of the face (the centers of the two eyes, the nose tip, and the two corners of the mouth) may be detected, and a feature code of the face extracted as the first face feature information. The feature extraction algorithm can be obtained by model training under a deep learning framework, such as Caffe. For example, sample pictures can be collected and their key point positions annotated; training on the annotated samples enables the algorithm to extract facial feature information such as the facial features and contour.
S203, processing the unoccluded face pictures to occlude a target face area;
specifically, in order to obtain an occluded face picture, an unoccluded face picture may be processed. For example, if a face picture of a wearer is desired, a part from the nose to the mouth of the face that is not blocked can be fixed and filled with a certain fixed color, which is equivalent to simulating the wearer's face.
S204, extracting second face feature information from the pictures with the target face area occluded, and storing the second face feature information into the second feature library;
Specifically, a feature extraction algorithm performs feature extraction on the face pictures whose target face area is occluded to obtain the second face feature information, which is stored in the second feature library.
When the second face feature information is extracted, a number of occluded face pictures can be used in advance to train the face feature extraction algorithm; the trained algorithm then outputs the second face feature information when it performs feature extraction on an occluded face.
S205, collecting a picture to be detected, and extracting a face to be detected in the picture to be detected;
the above-mentioned steps S201 to S204 are steps completed in advance before face recognition, and do not need to be operated once every time face recognition is performed.
When face recognition is performed, a camera can be used to collect the picture to be detected, and a face extraction algorithm is then used to extract the face to be detected from it.
When pictures are collected, the camera can be configured so that higher-quality face pictures are selected.
S206, extracting the characteristic information of the face to be detected;
specifically, a feature extraction algorithm is adopted to extract feature information of the face to be detected. The extracted facial feature information may include positions and shapes of organs such as mouth, nose, eyes, eyebrows and the like, may include facial contours, and may also include marks of the face, such as moles, birthmarks and the like.
S207, if the feature information includes feature information for all preset face position points, judging that the face to be detected is not occluded; otherwise, judging that it is occluded;
the location points may include various parts of the human face, such as eyes, eyebrows, mouth, nose, and the like. Specifically, whether the face is occluded or not is determined, and the determination may be performed according to whether the extracted feature information includes features of each position point of the face. For example, feature information of two eyes, a nose and two lips in an unshielded face can be preset, and when detection is performed, if the face to be detected contains feature information of all preset positions, it is indicated that the face to be detected is unshielded; and if the face to be detected does not contain the characteristic information of all the preset positions, the fact that the face to be detected is shielded is indicated. For example, when a face wears a mask, the feature information of the nose and lips is generally not extracted, and it can be determined that the face is occluded.
S208, if the face to be detected is not occluded, recognizing the unoccluded face to be detected by using a preset first feature library, wherein the first feature library stores unoccluded first face feature information;
specifically, if the face to be detected is not blocked, the feature information of the face to be detected can be compared with the first face feature information, and if one first face feature information is particularly similar to the face to be detected, the face corresponding to the first face feature information can be considered to be the same as the face to be detected.
For example, the extracted feature information of the face to be detected and the first face feature information can both be feature vectors, and the similarity between the face to be detected and the face corresponding to the first face feature information can be judged by calculating the similarity between the two vectors. The similarity between each item of first face feature information and the face to be detected is calculated and the maximum value selected; if the maximum similarity is greater than a preset similarity threshold, the face associated with the corresponding first face feature information is judged to be the same as the face to be detected.
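The select-the-maximum-then-threshold rule above can be sketched as follows. The patent does not fix a similarity measure; cosine similarity is used here as a common illustrative choice.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (illustrative measure)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_match(query, library, threshold):
    """Compare the query vector against every stored vector, keep the
    maximum similarity, and accept the match only above the threshold."""
    best_id, best_sim = None, -1.0
    for identity, vec in library.items():
        sim = cosine_similarity(query, vec)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id if best_sim > threshold else None
```

The same function serves step S209 with the second feature library, simply called with the higher threshold that the occluded-face comparison warrants.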
S209, if the face to be detected is occluded, recognizing the occluded face to be detected by using a preset second feature library, wherein the second feature library stores occluded second face feature information, and the first face feature information and the second face feature information are both extracted from unoccluded face pictures;
specifically, if the face to be detected is blocked, the feature information of the face to be detected may be compared with the second face feature information, and if there is a second face feature information that is particularly similar to the face to be detected, it may be considered that the face corresponding to the second face feature information is the same as the face to be detected.
For example, the extracted feature information of the face to be detected and the second face feature information can both be feature vectors, and the similarity between the face to be detected and the face corresponding to the second face feature information can be judged by calculating the similarity between the two vectors. The similarity between each item of second face feature information and the face to be detected is calculated and the maximum value selected; if the maximum similarity is greater than a preset similarity threshold, the face associated with the corresponding second face feature information is judged to be the same as the face to be detected. The similarity threshold of the second feature library can be set higher: because an occluded face contains less feature information, a higher threshold improves the accuracy of occluded face recognition.
S210, in a first environment, prompting recognized people who have not covered the target face area;
Specifically, in some situations the target face area may be required to be covered. For example, during an epidemic, many regulations require a mask to be worn when going out. In this case, when a face without a mask is recognized, a prompt can be issued to remind the person to wear one.
S211, in a second environment, prompting recognized people who have covered the target face area.
In other cases, it is desirable that the face be totally exposed. For example, when security check is performed, the whole face needs to be exposed, and in this case, if the recognized face is blocked, the person can be prompted to expose the whole face.
In this embodiment, whether the face is occluded can be judged from the extracted feature information. In an environment that requires the face to be covered, people who have not covered it can be prompted; in an environment where the face must not be covered, people who have covered it can be prompted. This reduces the workload of staff.
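The two-environment prompting of steps S210 and S211 can be sketched as a small rule. The environment labels and prompt texts are illustrative assumptions.

```python
def prompt_if_needed(environment, face_occluded):
    """Environment-dependent prompting (steps S210-S211, sketch).
    'require_cover' models the first environment (e.g. a mask mandate);
    'require_expose' models the second (e.g. a security check)."""
    if environment == "require_cover" and not face_occluded:
        return "Please cover the target face area (e.g. wear a mask)."
    if environment == "require_expose" and face_occluded:
        return "Please expose the whole face."
    return None  # no prompt needed
```

Because recognition already labels each face as occluded or not, the prompt costs no extra detection work.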
Fig. 3 is a schematic flow chart of a face recognition method provided in the third embodiment of the present application, and as shown in fig. 3, the method includes:
s301, collecting a picture to be detected, and extracting a face to be detected in the picture to be detected;
the execution main body of this embodiment is a terminal device, and the terminal device may be a face recognition camera, a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, a super-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like.
Specifically, when face recognition is performed, a camera may be used to collect a picture, and then a face to be detected in the picture is extracted.
S302, extracting the characteristic information of the face to be detected;
Specifically, recognizing the face to be detected involves comparing its features against those of enrolled faces, so feature information of the face can be extracted with a feature extraction algorithm.
S303, if the feature information comprises feature information of all preset face position points, judging that the face to be detected is not shielded; otherwise, judging that the face to be detected is shielded;
Specifically, whether the face to be detected is occluded can be judged from which feature information it contains. For example, when the face to be detected wears a mask, its feature information generally does not include feature information of the nose and mouth; when it wears sunglasses, its feature information generally does not include feature information of the eyes.
The feature information that an unoccluded face should include may be preset in the terminal device; for example, an unoccluded face may be required to contain two eyes, one nose, and two lips. When the feature information of the face to be detected includes feature information of two eyes, one nose, and two lips, the face can be judged as unoccluded; when the feature information of some face part is missing, the face can be judged as occluded.
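The completeness test described above can be sketched as a set comparison; the preset point names below are illustrative assumptions.

```python
# Sketch of the occlusion test in S303: a face counts as unoccluded only
# when feature information is present for every preset face position point.
# The point names are illustrative assumptions.

PRESET_POINTS = {"left_eye", "right_eye", "nose", "upper_lip", "lower_lip"}

def is_occluded(feature_info):
    """Return True if any preset face position point is missing from the extracted features."""
    return not PRESET_POINTS.issubset(feature_info.keys())
```

A mask-wearing face would lack the nose and lip entries and therefore be judged occluded, while a fully visible face with all five entries would pass.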
S304, if the face to be detected is not shielded, respectively calculating the similarity between the feature information and each first face feature information in the first feature library;
Specifically, when the face to be detected is not occluded, feature information of all position points can be detected from it, and the first face feature information likewise contains feature information of all position points; the two can therefore be compared and the similarity between them calculated.
S305, when first target face feature information with first similarity larger than preset first maximum similarity with the feature information is obtained through calculation, judging that the face to be detected is the same as the face corresponding to the first target face feature information;
The first maximum similarity may be set by the user and is generally set to a relatively high value for the first feature library. The similarity between the feature information of the face to be detected and each piece of first face feature information is calculated in turn; when the similarity with some first face feature information exceeds the first maximum similarity, the face to be detected is judged to be the same as the face corresponding to that information, and similarity calculation against the remaining first face feature information can be terminated.
S306, if the first similarity of each piece of first face feature information and the feature information is smaller than the first maximum similarity, and the maximum value of the first similarity is larger than a preset first minimum similarity, determining that the face to be detected is the same as the face corresponding to the maximum value of the first similarity;
the first minimum similarity is set by the user and is generally smaller than the first maximum similarity with respect to the first feature library.
When the similarity between each first face feature information in the first feature library and the face to be detected is smaller than the first maximum similarity, and the first face feature information with the similarity larger than the first minimum similarity with the feature information of the face to be detected exists, the face corresponding to the first face feature information corresponding to the maximum similarity can be judged to be the same as the face to be detected.
In addition, there is the case where the similarity between every piece of first face feature information in the first feature library and the face to be detected is smaller than the first minimum similarity; at this time it may be determined that recognition fails, i.e. the face is not enrolled in the terminal device. Of course, if the face is unoccluded at this time, its picture may also be collected and added to the face library, with the corresponding first face feature information and second face feature information extracted for subsequent face recognition.
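The matching procedure of S304-S306 (and, with its own thresholds, the mirrored procedure of S307-S309) can be sketched as follows. Cosine similarity over feature vectors is an assumption here, as the patent does not fix a particular similarity measure, and all names are illustrative.

```python
# Sketch of the two-threshold matching: early termination above max_sim,
# best-candidate fallback above min_sim, failure otherwise.
import math

def cosine_similarity(a, b):
    """Cosine similarity of two feature vectors (assumed nonzero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_face(features, library, max_sim, min_sim):
    """Return the id of the matched face, or None on recognition failure.

    library: mapping of face id -> stored feature vector.
    """
    best_id, best_sim = None, -1.0
    for face_id, stored in library.items():
        sim = cosine_similarity(features, stored)
        if sim > max_sim:          # confident match: stop comparing (S305)
            return face_id
        if sim > best_sim:
            best_id, best_sim = face_id, sim
    if best_sim > min_sim:         # best candidate above the floor (S306)
        return best_id
    return None                    # below the minimum: recognition fails
```

The same function would be called with the second feature library and the (stricter) second thresholds when the face is occluded.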
S307, if the face to be detected is shielded, respectively calculating the similarity between the feature information and each second face feature information in the second feature library;
Specifically, when the face to be detected is occluded, only feature information of part of the position points can be detected from it, and the second face feature information likewise contains feature information of only those position points; the two can therefore be compared and the similarity between them calculated.
Illustratively, if the detected face wears a mask, the face to be detected only contains feature information of the eye and eyebrow regions. The second face feature information in the second feature library may be obtained by filling the region from the nose to the mouth in each face picture with a uniform color and then performing feature extraction; this makes the second face feature information equivalent to the feature information of a mask-wearing face, so that when the feature information of the face to be detected is compared with the second face feature information, the comparable parts are the same.
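A minimal sketch of that preprocessing step, in which the nose-to-mouth region of an enrolled face picture is filled with a uniform color before the second face feature information is extracted. The fixed row boundary and the gray fill color are assumptions; a real system would derive the region from detected landmarks.

```python
# Fill the lower part of a face image with a uniform color to simulate
# a worn mask before second-library feature extraction. Row boundary and
# color are illustrative assumptions.
import numpy as np

def mask_lower_face(image, top_row, color=(128, 128, 128)):
    """Return a copy of an HxWx3 image with rows from top_row downward filled uniformly."""
    out = image.copy()
    out[top_row:, :, :] = color   # uniform fill over the nose-to-mouth region
    return out
```

Feature extraction would then run on the filled copy, leaving the original enrollment picture untouched for the first feature library.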
S308, when second target face feature information with the second similarity larger than a preset second maximum similarity with the feature information is obtained through calculation, judging that the face to be detected is the same as the face corresponding to the second target face feature information;
The second maximum similarity may be set by the user and is generally set to a relatively high value for the second feature library. The similarity between the feature information of the face to be detected and each piece of second face feature information is calculated in turn; when the similarity with some second face feature information exceeds the second maximum similarity, the face to be detected is judged to be the same as the face corresponding to that information, and similarity calculation against the remaining second face feature information can be terminated.
It should be noted that, because fewer features can be extracted from an occluded face than from an unoccluded one, the second maximum similarity may be set greater than the first maximum similarity. A higher threshold imposes a stricter requirement on the feature comparison: the comparable range between an occluded face and the second face feature information is smaller (with a mask on, only the upper half of the face can be compared, not the whole face), so raising the similarity threshold raises the bar for declaring two faces the same and improves the accuracy of occluded face recognition.
S309, if the second similarity of each second face feature information and the feature information is smaller than the second maximum similarity, and the maximum value of the second similarity is larger than a preset second minimum similarity, determining that the face to be detected is the same as the face corresponding to the maximum value of the second similarity;
the second minimum similarity is set by the user and is generally smaller than the second maximum similarity with respect to the second feature library.
When the similarity between each piece of second face feature information in the second feature library and the face to be detected is smaller than the second maximum similarity, and there is second face feature information whose similarity with the feature information of the face to be detected is greater than the second minimum similarity, it can be determined that the face corresponding to the second face feature information corresponding to the maximum similarity is the same as the face to be detected.
In addition, there is a case where the similarity between each piece of second face feature information in the second feature library and the face to be detected is smaller than the second minimum similarity, and at this time, it may be determined that the recognition fails, and the face is not stored in the terminal device.
S310, collecting current position information and current time information, and storing the current position information, the current time information and the recognized human face in a correlation mode.
Specifically, the terminal device may collect current time information and current position information, and then store the current time information and the current position information in association with the recognized face in the database. The action track of the person corresponding to the face can be analyzed from the positions of the face at each time point stored in the database.
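The association and trajectory lookup of S310 can be sketched as follows; an in-memory list stands in for the SD-card database, and all field names are assumptions.

```python
# Sketch of S310: associate each recognized face with the current time
# and position, then read back one person's trajectory for track analysis.
from datetime import datetime

records = []  # each entry: (face_id, timestamp, position)

def store_recognition(face_id, position, timestamp=None):
    """Store a recognition event; defaults to the current time."""
    records.append((face_id, timestamp or datetime.now(), position))

def trajectory(face_id):
    """Positions of one face ordered by time."""
    hits = [(t, pos) for fid, t, pos in records if fid == face_id]
    return [pos for _, pos in sorted(hits)]
```

Replaying the stored positions in time order yields the action track of the person corresponding to the face.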
The method is illustrated in a specific example as follows:
in an application scene, a face recognition camera can be deployed, a non-shielded face library, a first feature library storing first face feature information of a non-shielded face and a second feature library storing second face feature information of a shielded face are imported into the camera; configuring a first maximum similarity and a first minimum similarity corresponding to the first feature library; and configuring a second maximum similarity and a second minimum similarity corresponding to the second feature library, wherein the second maximum similarity is greater than the first maximum similarity, and the second minimum similarity is greater than the first minimum similarity.
When a person enters the camera's field of view, the camera can capture the person's face, select the picture in which the face appears best, extract the face, perform attribute analysis on it, and determine whether the face is occluded; it then identifies the matching face in the face library by feature comparison, confirms the person's identity, and stores the collected time and position information in association with the recognized face on a Secure Digital (SD) card.
If the user needs to wear the mask in the current scene, and the face does not wear the mask, an alarm can be given; if the whole face is required to be exposed in the current scene, and the face is shielded, a prompt can be sent.
When the track analysis is needed, the position information of the human face at each time point can be read from the SD card.
In this embodiment, the second maximum similarity is greater than the first maximum similarity, and the second minimum similarity is greater than the first minimum similarity, which is equivalent to that when an occluded face is identified, a stricter feature comparison is required, so that the accuracy of the occluded face identification is improved. In addition, the terminal equipment can collect current information and store the current information in association with the recognized face, and the information can be used for trajectory analysis of people. In addition, when the similarity of the feature comparison is greater than the maximum similarity threshold, the similarity calculation is stopped, and the face recognition efficiency can be improved.
Fig. 4 is a schematic structural diagram of a face recognition apparatus 4 according to a fourth embodiment of the present application. The apparatus includes:
the face extraction module 41 to be detected is used for collecting a picture to be detected and extracting a face to be detected in the picture to be detected;
the first identification module 42 is configured to identify, if the face to be detected is not occluded, the unoccluded face to be detected by using a preset first feature library, where the first feature library stores unoccluded first face feature information;
the second identifying module 43 is configured to identify the blocked face to be detected by using a preset second feature library if the face to be detected is blocked, where the second feature library stores blocked second face feature information, and the first face feature information and the second face feature information are extracted from an unblocked face picture.
The face recognition apparatus 4 further includes:
the sample acquisition module is used for acquiring the human face picture which is not shielded;
the first face characteristic information extraction module is used for extracting first face characteristic information from the unoccluded face picture and storing the first face characteristic information into the first characteristic library;
the shielding processing module is used for processing the human face picture which is not shielded so as to shield a target human face area;
and the second feature extraction module is used for extracting second face feature information from the picture which covers the target face area and storing the second face feature information into the second feature library.
The face recognition apparatus 4 further includes:
the characteristic information extraction module is used for extracting the characteristic information of the face to be detected;
the shielding judgment module is used for judging that the face to be detected is not shielded if the feature information comprises feature information of all preset face position points; otherwise, judging that the face to be detected is blocked.
The first identification module 42 includes:
the first similarity calculation operator module is used for respectively calculating first similarities between the feature information and each piece of first face feature information in the first feature library;
the first maximum similarity comparison module is used for judging that the face to be detected is the same as the face corresponding to the first target face feature information when the first target face feature information of which the first similarity with the feature information is greater than the preset first maximum similarity is obtained through calculation;
and the first minimum similarity comparison module is used for judging that the face to be detected is the same as the face corresponding to the maximum value of the first similarity if the first similarities of the feature information of each first face and the feature information are smaller than the first maximum similarity and the maximum value of the first similarity is larger than the preset first minimum similarity.
The second identification module 43 includes:
the second similarity calculation submodule is used for calculating second similarities between the feature information and each piece of second face feature information in the second feature library respectively;
the second maximum similarity comparison module is used for judging that the face to be detected is the same as the face corresponding to the second target face feature information when the second target face feature information of which the second similarity with the feature information is greater than the preset second maximum similarity is obtained through calculation;
and the second minimum similarity comparison module is used for judging that the face to be detected is the same as the face corresponding to the maximum value of the second similarity if the second similarities of the second face feature information and the feature information are smaller than the second maximum similarity and the maximum value of the second similarity is larger than the preset second minimum similarity.
The face recognition apparatus 4 further includes:
and the information acquisition module is used for acquiring current position information and current time information and storing the current position information, the current time information and the recognized human face in a correlation manner.
The face recognition apparatus 4 further includes:
the first prompting module is used for prompting the identified personnel who do not shield the target face area under the first environment; alternatively,
and the second prompting module is used for prompting the identified personnel shielding the target face area under the second environment.
Fig. 5 is a schematic structural diagram of a terminal device according to a fifth embodiment of the present application. As shown in fig. 5, the terminal device 5 of this embodiment includes: at least one processor 50 (only one shown in fig. 5), a memory 51, and a computer program 52 stored in the memory 51 and executable on the at least one processor 50, the processor 50 implementing the steps in any of the various method embodiments described above when executing the computer program 52.
The terminal device 5 may be a face recognition camera, a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 50, a memory 51. Those skilled in the art will appreciate that fig. 5 is only an example of the terminal device 5, and does not constitute a limitation to the terminal device 5, and may include more or less components than those shown, or combine some components, or different components, such as an input-output device, a network access device, and the like.
The processor 50 may be a Central Processing Unit (CPU); it may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may in some embodiments be an internal storage unit of the terminal device 5, such as a hard disk or memory of the terminal device 5. In other embodiments the memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a smart media card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal device 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used for storing an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a terminal device, enables the terminal device to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random-access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A face recognition method, comprising:
collecting a picture to be detected, and extracting a face to be detected in the picture to be detected;
if the face to be detected is not shielded, identifying the unshielded face to be detected by adopting a preset first feature library, wherein the first feature library stores unshielded first face feature information;
if the face to be detected is shielded, recognizing the shielded face to be detected by adopting a preset second feature library, wherein shielded second face feature information is stored in the second feature library, and the first face feature information and the second face feature information are extracted from the face picture which is not shielded.
2. The method of claim 1, further comprising:
acquiring an unobstructed face picture;
extracting first face characteristic information from the unobstructed face picture, and storing the first face characteristic information into the first characteristic library;
processing the human face picture which is not shielded to shield a target human face area;
and extracting second face feature information from the picture which covers the target face area, and storing the second face feature information into the second feature library.
3. The method according to claim 1 or 2, wherein after extracting the face to be detected in the picture to be detected, the method further comprises:
extracting the characteristic information of the face to be detected;
if the feature information comprises feature information of all preset face position points, judging that the face to be detected is not shielded; otherwise, judging that the face to be detected is blocked.
4. The method according to claim 3, wherein the recognizing the non-occluded face to be detected by using the preset first feature library comprises:
respectively calculating first similarity between the feature information and each first face feature information in the first feature library;
when first target face feature information with first similarity greater than preset first maximum similarity with the feature information is obtained through calculation, judging that the face to be detected is the same as the face corresponding to the first target face feature information;
and if the first similarity of each piece of first face feature information and the feature information is smaller than the first maximum similarity, and the maximum value of the first similarity is larger than a preset first minimum similarity, judging that the face to be detected is the same as the face corresponding to the maximum value of the first similarity.
5. The method according to claim 1, 2 or 4, wherein the recognizing the occluded face to be detected by using a preset second feature library comprises:
respectively calculating second similarity of the feature information and each second face feature information in the second feature library;
when second target face feature information with second similarity larger than preset second maximum similarity with the feature information is obtained through calculation, judging that the face to be detected is the same as the face corresponding to the second target face feature information;
and if the second similarity of each second face feature information and the feature information is smaller than the second maximum similarity, and the maximum value of the second similarity is larger than a preset second minimum similarity, determining that the face to be detected is the same as the face corresponding to the maximum value of the second similarity.
6. The method of claim 5, further comprising:
acquiring current position information and current time information, and storing the current position information, the current time information and the recognized human face in a correlation manner.
7. The method of claim 1, further comprising:
under a first environment, prompting the identified personnel who do not shield the target face area; alternatively,
and under the second environment, prompting the identified personnel blocking the target face area.
8. A face recognition apparatus, comprising:
the face extraction module to be detected is used for collecting a picture to be detected and extracting a face to be detected in the picture to be detected;
the first identification module is used for identifying the unshielded face to be detected by adopting a preset first feature library if the face to be detected is unshielded, wherein the first feature library stores unshielded first face feature information;
and the second identification module is used for identifying the blocked face to be detected by adopting a preset second feature library if the face to be detected is blocked, the second feature library stores blocked second face feature information, and the first face feature information and the second face feature information are extracted from the unblocked face picture.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202010321650.1A 2020-04-22 2020-04-22 Face recognition method, face recognition device, terminal equipment and medium Pending CN111597910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010321650.1A CN111597910A (en) 2020-04-22 2020-04-22 Face recognition method, face recognition device, terminal equipment and medium

Publications (1)

Publication Number Publication Date
CN111597910A true CN111597910A (en) 2020-08-28

Family

ID=72189100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010321650.1A Pending CN111597910A (en) 2020-04-22 2020-04-22 Face recognition method, face recognition device, terminal equipment and medium

Country Status (1)

Country Link
CN (1) CN111597910A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815845A (en) * 2018-12-29 2019-05-28 深圳前海达闼云端智能科技有限公司 Face recognition method and device and storage medium
CN109919003A (en) * 2019-01-23 2019-06-21 平安科技(深圳)有限公司 Face identification method, terminal device and computer readable storage medium
CN110232369A (en) * 2019-06-20 2019-09-13 深圳和而泰家居在线网络科技有限公司 A kind of face identification method and electronic equipment

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112115866A (en) * 2020-09-18 2020-12-22 北京澎思科技有限公司 Face recognition method and device, electronic equipment and computer readable storage medium
CN112115886A (en) * 2020-09-22 2020-12-22 北京市商汤科技开发有限公司 Image detection method and related device, equipment and storage medium
WO2022062379A1 (en) * 2020-09-22 2022-03-31 北京市商汤科技开发有限公司 Image detection method and related apparatus, device, storage medium, and computer program
JP2022552754A (en) * 2020-09-22 2022-12-20 ベイジン・センスタイム・テクノロジー・デベロップメント・カンパニー・リミテッド IMAGE DETECTION METHOD AND RELATED DEVICE, DEVICE, STORAGE MEDIUM, AND COMPUTER PROGRAM
JPWO2022064565A1 (en) * 2020-09-23 2022-03-31
WO2022064565A1 (en) * 2020-09-23 2022-03-31 日本電気株式会社 Comparison device, comparison method, and program
JP7272510B2 (en) 2020-09-23 2023-05-12 日本電気株式会社 Verification device, verification method, program
CN115471944A (en) * 2022-08-08 2022-12-13 国网河北省电力有限公司建设公司 Warehouse access lock control method, device and system and readable storage medium
CN115240265A (en) * 2022-09-23 2022-10-25 深圳市欧瑞博科技股份有限公司 User intelligent identification method, electronic equipment and storage medium
CN115240265B (en) * 2022-09-23 2023-01-10 深圳市欧瑞博科技股份有限公司 User intelligent identification method, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111597910A (en) Face recognition method, face recognition device, terminal equipment and medium
CN108701216B (en) Face recognition method and device and intelligent terminal
CN111523480B (en) Method and device for detecting face obstruction, electronic equipment and storage medium
CN106133752B (en) Eye gaze tracking
CN111563480B (en) Conflict behavior detection method, device, computer equipment and storage medium
CN107958230B (en) Facial expression recognition method and device
WO2019245768A1 (en) System for predicting articulated object feature location
CN112115866A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN110163096B (en) Person identification method, person identification device, electronic equipment and computer readable medium
CN107844742B (en) Facial image glasses minimizing technology, device and storage medium
CN109299658B (en) Face detection method, face image rendering device and storage medium
CN111310705A (en) Image recognition method and device, computer equipment and storage medium
CN110472613B (en) Object behavior identification method and device
CN108197318A (en) Face identification method, device, robot and storage medium
CN108108711B (en) Face control method, electronic device and storage medium
CN111914775B (en) Living body detection method, living body detection device, electronic equipment and storage medium
CN109766755A (en) Face identification method and Related product
CN112364827A (en) Face recognition method and device, computer equipment and storage medium
CN110610127A (en) Face recognition method and device, storage medium and electronic equipment
CN111062328A (en) Image processing method and device and intelligent robot
CN114549557A (en) Portrait segmentation network training method, device, equipment and medium
CN114677607A (en) Real-time pedestrian counting method and device based on face recognition
CN111738078A (en) Face recognition method and device
CN110175553B (en) Method and device for establishing feature library based on gait recognition and face recognition
Rusli et al. Evaluating the masked and unmasked face with LeNet algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230103

Address after: 518000 Yingfei Haocheng Science Park, Guansheng 5th Road, Luhu Community, Guanhu Street, Longhua District, Shenzhen, Guangdong 1515

Applicant after: Shenzhen Infineon Information Co.,Ltd.

Address before: 518000 Room 301, Infineon Technology Co., Ltd., No. 12, Guanbao Road, Luhu community, Guanhu street, Longhua District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN INFINOVA INTELLIGENT TECHNOLOGY Co.,Ltd.