CN115601817A - Face recognition method and device, processor and electronic equipment - Google Patents


Info

Publication number
CN115601817A
Authority
CN
China
Prior art keywords
face
face image
image
feature vector
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211362522.7A
Other languages
Chinese (zh)
Inventor
朱菲
王超
姜俊萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202211362522.7A
Publication of CN115601817A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application discloses a face recognition method and device, a processor, and electronic equipment, relating to the technical field of artificial intelligence. The method comprises the following steps: acquiring a visible light face image of a target object and an infrared face image of the target object, and performing face and background separation on the visible light face image based on the infrared face image to obtain a first face image; performing face key point recognition on the first face image to obtain a second face image, wherein the second face image is a face image meeting a preset standard; performing multi-scale LBP coding on the second face image to obtain a first face feature vector corresponding to the second face image; and performing face recognition according to the first face feature vector to determine the identity information of the target object. The method and device thereby solve the problem in the related art that face recognition methods are easily affected by the acquisition environment, resulting in low recognition accuracy.

Description

Face recognition method and device, processor and electronic equipment
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a face recognition method and device, a processor and electronic equipment.
Background
Biometric technology is currently the most widely applied identity recognition and authentication technology owing to its advantages of safety, universality, and ease of maintenance. As an important branch of biometric technology, face recognition is increasingly used in practical applications because it is natural, friendly, highly accepted by users, and convenient to collect, and it remains one of the most difficult research topics in the field of biometric identification and even in artificial intelligence as a whole.
Because the human face is susceptible to factors such as expression, posture, and illumination, recognition performance is limited. Existing face recognition methods are generally designed for the case where the user cooperates and the acquisition conditions are ideal; under non-ideal conditions, for example when the user has changed hairstyle, applied makeup, or is wearing accessories, face recognition remains a difficult problem. Moreover, during actual data acquisition the collected face image often contains redundant information from non-face areas such as hair and accessories, is affected by expression changes, and inevitably contains noise, all of which greatly interfere with subsequent feature extraction and recognition. Under these circumstances, how to detect the face region quickly and accurately has become a problem urgently needing to be solved in face recognition research.
For the problem that face recognition methods in the related art are easily affected by the acquisition environment, resulting in low face recognition accuracy, no effective solution has been proposed at present.
Disclosure of Invention
The present application mainly aims to provide a face recognition method and apparatus, a processor, and an electronic device, so as to solve the problem that the face recognition method in the related art is easily affected by the acquisition environment, and thus the accuracy of face recognition is low.
In order to achieve the above object, according to an aspect of the present application, a face recognition method is provided. The method comprises the following steps: acquiring a visible light face image of a target object and an infrared face image of the target object, and performing face and background separation on the visible light face image based on the infrared face image to obtain a first face image; performing face key point identification on the first face image to obtain a second face image, wherein the second face image is a face image meeting a preset standard; carrying out multi-scale LBP coding on the second face image to obtain a first face feature vector corresponding to the second face image; and carrying out face recognition according to the first face feature vector to determine the identity information of the target object.
Further, performing face key point recognition on the first face image to obtain a second face image includes: performing face key point detection on the first face image through a corner detection algorithm to obtain position information of the face key points; and processing the first face image according to the position information of the face key points to obtain the second face image.
Further, processing the first face image according to the position information of the face key points to obtain the second face image includes: performing angle rotation on the first face image according to the position information of the face key points to obtain a processed first face image; and performing scaling adjustment on the processed first face image through bilinear interpolation to obtain the second face image.
Further, performing multi-scale LBP coding on the second facial image to obtain a first facial feature vector corresponding to the second facial image includes: partitioning the second face image to obtain a plurality of face image sub-regions; and carrying out multi-scale LBP coding on each face image subregion to obtain a first face feature vector corresponding to the second face image.
Further, performing multi-scale LBP coding on each face image subregion to obtain a first face feature vector corresponding to the second face image includes: setting a plurality of preset scales according to the pixel points of each face image subregion, and performing multi-scale LBP feature extraction on the face image subregion according to the preset scales to obtain the multi-scale LBP feature of each face image subregion; constructing a multi-scale LBP feature histogram of each face image subregion according to the multi-scale LBP feature of each face image subregion; and obtaining a first face feature vector corresponding to the second face image according to the multi-scale LBP feature histogram of each face image subregion.
Further, obtaining a first face feature vector corresponding to the second face image according to the multi-scale LBP feature histogram includes: splicing the multi-scale LBP characteristic histogram of each face image subregion to obtain a total LBP characteristic histogram; obtaining an initial face feature vector corresponding to the second face image according to the total LBP feature histogram; and carrying out normalization processing on the initial face feature vector to obtain the first face feature vector.
Further, performing face recognition according to the first face feature vector to determine the identity information of the target object includes: calculating the Euclidean distance between the first face feature vector and a second face feature vector of a face image in a target database; and determining the identity information of the target object from the target database according to the Euclidean distance.
In order to achieve the above object, according to another aspect of the present application, a face recognition apparatus is provided. The device includes: the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a visible light face image of a target object and an infrared face image of the target object, and performing face and background separation on the visible light face image based on the infrared face image to obtain a first face image; the first identification unit is used for carrying out face key point identification on the first face image to obtain a second face image, wherein the second face image is a face image meeting a preset standard; the encoding unit is used for carrying out multi-scale LBP encoding on the second face image so as to obtain a first face feature vector corresponding to the second face image; and the second identification unit is used for carrying out face identification according to the first face characteristic vector so as to determine the identity information of the target object.
Further, the first recognition unit includes: a detection subunit, configured to perform face key point detection on the first face image through a corner detection algorithm to obtain position information of the face key points; and a first processing subunit, configured to process the first face image according to the position information of the face key points to obtain the second face image.
Further, the processing subunit includes: a rotation module, configured to perform angle rotation on the first face image according to the position information of the face key points to obtain a processed first face image; and an adjusting module, configured to perform scaling adjustment on the processed first face image through bilinear interpolation to obtain the second face image.
Further, the encoding unit includes: the second processing subunit is used for carrying out partition processing on the second face image to obtain a plurality of face image sub-areas; and the coding subunit is used for carrying out multi-scale LBP coding on each face image subregion to obtain a first face feature vector corresponding to the second face image.
Further, the encoding sub-unit includes: the setting module is used for setting a plurality of preset scales according to the pixel points of each face image subregion, and performing multi-scale LBP feature extraction on the face image subregion according to the preset scales to obtain the multi-scale LBP feature of each face image subregion; the construction module is used for constructing a multi-scale LBP feature histogram of each face image subregion according to the multi-scale LBP feature of each face image subregion; and the first determining module is used for obtaining a first face feature vector corresponding to the second face image according to the multi-scale LBP feature histogram of each face image subregion.
Further, the first determining module comprises: the splicing submodule is used for splicing the multi-scale LBP characteristic histogram of each face image subregion to obtain a total LBP characteristic histogram; the determining submodule is used for obtaining an initial face feature vector corresponding to the second face image according to the total LBP feature histogram; and the processing submodule is used for carrying out normalization processing on the initial face feature vector to obtain the first face feature vector.
Further, the second recognition unit includes: the calculation module is used for calculating the Euclidean distance between the first face feature vector and a second face feature vector of the face image in a target database; and the second determining module is used for determining the identity information of the target object from the target database according to the Euclidean distance.
In order to achieve the above object, according to another aspect of the present application, there is provided a processor configured to run a program, wherein the program, when running, performs the face recognition method according to any one of the above.
To achieve the above object, according to another aspect of the present application, there is provided an electronic device comprising one or more processors and a memory, the memory storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the face recognition method according to any one of the above.
By means of the present application, the following steps are adopted: acquiring a visible light face image of a target object and an infrared face image of the target object, and performing face and background separation on the visible light face image based on the infrared face image to obtain a first face image; performing face key point recognition on the first face image to obtain a second face image, wherein the second face image is a face image meeting a preset standard; performing multi-scale LBP coding on the second face image to obtain a first face feature vector corresponding to the second face image; and performing face recognition according to the first face feature vector to determine the identity information of the target object. The problem in the related art that face recognition is easily affected by the acquisition environment, resulting in low recognition accuracy, is thereby solved. With the infrared face image as an aid, face and background separation and face key point recognition are performed on the visible light face image to obtain the second face image; multi-scale LBP coding is then performed on the second face image; and face recognition is finally performed according to the first face feature vector, so that richer global and local image features are extracted and the accuracy of face recognition is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
fig. 1 is a flowchart of a face recognition method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of face and background separation provided in accordance with an embodiment of the present application;
fig. 3 is a schematic diagram of LBP coding provided in accordance with an embodiment of the present application;
fig. 4 is a first schematic diagram of multi-scale LBP coding provided in accordance with an embodiment of the present application;
fig. 5 is a multi-scale LBP feature histogram provided in accordance with an embodiment of the present application;
fig. 6 is a schematic diagram two of multi-scale LBP coding provided according to an embodiment of the present application;
FIG. 7 is a flow chart of an alternative face recognition method provided in accordance with an embodiment of the present application;
FIG. 8 is a schematic diagram of a face recognition apparatus provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of an alternative face recognition apparatus provided in accordance with an embodiment of the present application;
fig. 10 is a schematic diagram of an electronic device provided according to an embodiment of the application.
Detailed Description
It should be noted that, in the present application, the embodiments and features of the embodiments may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order that those skilled in the art may better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that relevant information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) referred to in the present disclosure are information and data that are authorized by the user or sufficiently authorized by various parties. For example, an interface is provided between the system and the relevant user or organization, before obtaining the relevant information, an obtaining request needs to be sent to the user or organization through the interface, and after receiving the consent information fed back by the user or organization, the relevant information is obtained.
The present invention is described below with reference to preferred implementation steps, and fig. 1 is a flowchart of a face recognition method provided in an embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
step S101, acquiring a visible light face image of a target object and an infrared face image of the target object, and separating a face from a background of the visible light face image based on the infrared face image to obtain a first face image;
step S102, carrying out face key point identification on the first face image to obtain a second face image, wherein the second face image is a face image meeting a preset standard;
step S103, carrying out multi-scale LBP coding on the second face image to obtain a first face feature vector corresponding to the second face image;
and step S104, carrying out face recognition according to the first face feature vector to determine the identity information of the target object.
Specifically, counters at financial institutions are generally equipped with both an ordinary (visible light) camera and an infrared camera, and a client transacting business undergoes face recognition and verification. The face information extracted by an ordinary camera is easily affected by illumination, accessories, and the like. The image captured by an infrared camera is formed from infrared radiation, i.e. heat, and is related to the internal structure of the human body; it has strong anti-interference capability, but its resolution is low and its details are blurred, so it also has certain limitations. Combining the infrared image with the ordinary image therefore lets their advantages complement each other and is very necessary for complete face recognition. By incorporating the characteristics of the infrared image, the face recognition method of the present application obtains richer global and local image features, extracts features more accurately, and further improves the accuracy of face recognition.
A visible light face image of the target object and an infrared face image of the target object are collected in real time. The collected face images may contain a relatively cluttered background, so for the accuracy of subsequent results the face image is first preprocessed and the key area is cropped. It should be noted that both the visible light face image and the infrared face image are grayscale images.
The infrared face image obtained by the infrared camera is displayed according to temperature, and the face area of a person is clearly distinguished from the cooler surrounding environment and facial accessories. Combining this characteristic with the globally invariant features of the infrared image, an infrared-assisted approach can be used: an appropriate threshold is set (for example, around a gray value of 120) to separate the target face from the background and determine the basic position of the target face. In an alternative embodiment, the following formula can be used for face and background separation:
g(x, y) ∈ t0 if g(x, y) ≥ T, and g(x, y) ∈ t1 otherwise,

where (x, y) are the coordinates of a pixel, g(x, y) is the gray value of that pixel, T is the threshold, t0 is the face region, and t1 is the background region. After this processing the main face area is cropped, as shown in fig. 2. The visible light face image is then separated into face and background on the basis of the infrared face image to obtain the first face image.
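By way of illustration, the infrared-assisted thresholding above can be sketched as follows. This is a minimal sketch, not part of the disclosure; the function name, the use of NumPy, and the default threshold of 120 from the example above are assumptions.

```python
import numpy as np

def separate_face_background(visible: np.ndarray, infrared: np.ndarray,
                             threshold: int = 120,
                             background_value: int = 0) -> np.ndarray:
    """Keep visible-light pixels whose infrared gray value reaches the
    threshold (the warm face region t0) and blank out the rest (t1)."""
    mask = infrared >= threshold      # t0: face region, t1: background
    first_face_image = visible.copy()
    first_face_image[~mask] = background_value
    return first_face_image
```

The bounding box of the surviving pixels would then be cropped to give the first face image.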
The temperature values at different positions of the face obtained by infrared differ, and from the physiology of the face it is known that regions such as the eye corners and the nose wings have higher temperatures, so the corresponding image areas are bright, while regions such as the nose and lips are rich in tissue, have lower temperatures, and appear dark. The positions of key points such as the eyes and the nose are obtained from the positions of the bright regions (eye corners, nose wings, etc.) of the infrared face and their locally stable characteristics. That is, the above-mentioned face key point recognition is performed on the first face image to obtain the second face image. In order to keep all face images to a uniform standard, the face images also need to be standardized. First, a standard face is defined: the face is upright, the line connecting the two eyes is parallel to the horizontal, and the centers of the nose and mouth lie on the perpendicular bisector of the line connecting the two eyes. The resulting second face image must satisfy this standard.
After the second face image is obtained, multi-scale LBP coding is performed on it to obtain the first face feature vector corresponding to the second face image. The similarity between the face to be recognized and the faces in the data set is then computed from the first face feature vector: under the given distance metric, the face in the face data set closest in features to the face under test is found, and its category is taken as the category of the face to be recognized, thereby achieving face recognition.
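The nearest-neighbour matching under a Euclidean distance metric can be sketched as follows (an illustrative sketch; the gallery layout and function name are assumptions, not part of the disclosure):

```python
import numpy as np

def identify(query: np.ndarray, gallery: dict) -> str:
    """Return the identity whose enrolled feature vector is closest to
    the query feature vector under the Euclidean distance."""
    return min(gallery,
               key=lambda name: float(np.linalg.norm(query - gallery[name])))
```

In practice a rejection threshold on the smallest distance would also be applied, so that unknown faces are not forcibly matched.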
In conclusion, in this scheme the infrared face image is used as an aid to perform face and background separation and face key point recognition on the visible light face image to obtain the second face image; multi-scale LBP coding is then performed on the second face image; and face recognition is finally performed according to the first face feature vector, so that richer global and local image features are extracted and the accuracy of face recognition is improved.
In order to improve the accuracy of face key point recognition, in the face recognition method provided in the embodiment of the present application, performing face key point recognition on the first face image to obtain the second face image includes the following steps: performing face key point detection on the first face image through a corner detection algorithm to obtain position information of the face key points; and processing the first face image according to the position information of the face key points to obtain the second face image.
Angle rotation is performed on the first face image according to the position information of the face key points to obtain a processed first face image, and scaling adjustment is performed on the processed first face image through bilinear interpolation to obtain the second face image.
Specifically, corners are detected with a corner detection algorithm, that is, points in areas of large brightness change are found and their local maxima are taken, giving the positions of key points such as the eyes and the nose. For example, the position information of the face key points can be expressed as F_r = (x_r, y_r), r ∈ {Leye, Reye, Nose}. After the face key point information is obtained, the face image needs to be standardized so that the second face image is a face image meeting the preset standard.
The first face image is rotated by a certain angle according to the position information of the face key points, bringing the face to be processed upright; the rotation angle is:
θ = arctan((y_Reye - y_Leye) / (x_Reye - x_Leye))
and performing angle rotation on the first face image through the rotation angle to obtain the processed first face image.
The processed first face image is then scaled in a certain proportion using bilinear interpolation, inserting or deleting pixels at appropriate positions in the processed first face image, so that the size of the image to be processed matches that of the standard image while keeping the image error as small as possible. In summary, the face key point information can be accurately identified through the above steps, which facilitates subsequent accurate extraction of facial feature information.
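The alignment step, i.e. levelling the eye line and rescaling with bilinear interpolation, might be sketched as follows. This is a sketch under the assumption that key points are given as (x, y) pixel coordinates; the helper names are not from the patent.

```python
import numpy as np

def eye_rotation_angle(leye, reye):
    """Rotation angle (radians) that brings the line between the two
    eye key points level with the horizontal."""
    return np.arctan2(reye[1] - leye[1], reye[0] - leye[0])

def bilinear_resize(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Resize a grayscale image with bilinear interpolation."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    out = np.empty((out_h, out_w))
    for i, y in enumerate(ys):
        y0 = int(np.floor(y)); y1 = min(y0 + 1, in_h - 1); wy = y - y0
        for j, x in enumerate(xs):
            x0 = int(np.floor(x)); x1 = min(x0 + 1, in_w - 1); wx = x - x0
            # interpolate along x on the two bracketing rows, then along y
            top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
            bottom = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
            out[i, j] = top * (1 - wy) + bottom * wy
    return out
```

A production implementation would normally rotate and resize in one resampling pass to avoid compounding interpolation error.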
In order to extract a feature value of a face more accurately, in the face recognition method provided in the embodiment of the present application, performing multi-scale LBP coding on a second face image to obtain a first face feature vector corresponding to the second face image includes: partitioning the second face image to obtain a plurality of face image sub-areas; and carrying out multi-scale LBP coding on each face image subregion to obtain a first face feature vector corresponding to the second face image.
When LBP is used for face recognition, the histogram of the LBP feature map is generally used as the feature vector. Original LBP compares a pixel with its 8 adjacent pixels in the 3 × 3 neighborhood: a neighbor whose gray value is not smaller than that of the center point is marked 1, otherwise 0; the marks are then combined in a fixed order into a binary number, and the corresponding decimal number is the LBP feature value of that pixel. The principle is shown in fig. 3 (1). Ordinary LBP is calculated from the differences between a single pixel and its neighborhood points; however, in some facial regions such as the nose and lips the pixel values over a large area are almost constant, so the center of a 3 × 3 neighborhood is treated as equal to its neighbors and the corresponding feature value is 0, which is obviously inconsistent with the actual features.
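The original 3 × 3 LBP code described above can be sketched as follows (illustrative only; the clockwise bit order starting at the top-left neighbour is one common convention, assumed here):

```python
import numpy as np

def lbp_value(patch: np.ndarray) -> int:
    """LBP code of the centre of a 3 x 3 patch: each of the 8 neighbours,
    visited clockwise from the top-left corner, contributes a 1 when its
    gray value is not smaller than the centre value, otherwise a 0."""
    centre = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = ''.join('1' if patch[y, x] >= centre else '0' for y, x in order)
    return int(bits, 2)  # the decimal LBP feature value
```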
In order to extract facial feature values more accurately, multi-scale LBP coding is performed on the second face image: the region under comparison is divided from the whole to the local, from coarse to fine, into image sub-blocks of different scales; each sub-block is treated as a whole and its average value is computed, and LBP coding is applied between this average and the averages of the surrounding sub-blocks, further capturing local detail. The principle is shown in fig. 3 (2). The second face image is partitioned into a plurality of face image sub-regions, and multi-scale LBP coding is then performed on the different sub-regions to obtain the first face feature vector corresponding to the second face image.
In conclusion, by performing multi-scale LBP coding on the second face image, richer global and local features of the image can be obtained, so that the extracted features are more accurate, and the accuracy of face recognition is further improved.
How to perform multi-scale LBP coding on each face image subregion is crucial, therefore, in the face recognition method provided in the embodiment of the present application, performing multi-scale LBP coding on each face image subregion to obtain a first face feature vector corresponding to a second face image includes the following steps: setting a plurality of preset scales according to the pixel points of each face image subregion, and performing multi-scale LBP feature extraction on the face image subregions according to the plurality of preset scales to obtain the multi-scale LBP feature of each face image subregion; constructing a multi-scale LBP characteristic histogram of each face image subregion according to the multi-scale LBP characteristic of each face image subregion; and obtaining a first face feature vector corresponding to the second face image according to the multi-scale LBP feature histogram of each face image subregion.
Obtaining the first face feature vector corresponding to the second face image according to the multi-scale LBP feature histograms includes: splicing the multi-scale LBP feature histograms of the face image sub-regions to obtain a total LBP feature histogram; obtaining an initial face feature vector corresponding to the second face image according to the total LBP feature histogram; and normalizing the initial face feature vector to obtain the first face feature vector.
Specifically, the normalized face can be divided into 5 × 3 × 2 = 30 face image sub-regions as shown in fig. 4 by combining the facial features and pixel points with the "three courts, five eyes" facial proportions. Scales can be set for the 30 image sub-regions as shown in fig. 4, and multi-scale LBP feature extraction is performed according to those scales to obtain the multi-scale LBP feature of each face image sub-region. The LBP features of each sub-region at its different scales are combined to obtain the multi-scale LBP feature histogram of that sub-region, finally yielding the feature vector of each face region.
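As a toy illustration of the partitioning step, the sketch below splits a normalised face image into a fixed grid of sub-regions. The patent's actual 5 × 3 × 2 layout follows the facial proportions rather than a uniform grid, so the `rows`/`cols` grid here is an assumption made purely for illustration.

```python
import numpy as np

def partition_face(image, rows=5, cols=6):
    """Split a normalised face image into a rows x cols grid of sub-regions.

    Any remainder pixels on the right/bottom edge are cropped so that all
    sub-regions have equal size.
    """
    h, w = image.shape
    rh, cw = h // rows, w // cols
    return [image[r * rh:(r + 1) * rh, c * cw:(c + 1) * cw]
            for r in range(rows) for c in range(cols)]
```

Each returned sub-region would then be fed to the multi-scale LBP feature extraction.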
In an alternative embodiment, two face images each of two persons are obtained, and fig. 5 shows the multi-scale LBP histograms corresponding to the same region, where (1) and (2) belong to one person and (3) and (4) to the other. It can be seen that the two multi-scale LBP histograms of the same region of the same person are relatively close, while the difference between different persons is large.
After the multi-scale LBP feature histograms of the face image sub-regions are obtained, they are connected in a fixed order to form one large histogram (i.e., the above-mentioned total LBP feature histogram), such as the one shown in fig. 6, which is the feature vector of the complete face (i.e., the above-mentioned initial face feature vector). Because different faces cover different numbers of pixel points, the initial face feature vector is normalized so that the extracted face feature vector lies in [0, 1].
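The splicing and normalisation steps can be sketched as below. Normalising by the peak bin count is one assumed convention for mapping the vector into [0, 1]; the text does not fix the exact normalisation, and dividing by the vector's sum would be an equally valid choice.

```python
import numpy as np

def face_feature_vector(region_histograms):
    """Concatenate per-region multi-scale LBP histograms in a fixed order
    and normalise the result into [0, 1].

    Dividing by the maximum bin count compensates for faces that cover
    different numbers of pixels, so that feature vectors from differently
    sized faces become comparable.
    """
    total = np.concatenate([np.asarray(h, dtype=float) for h in region_histograms])
    peak = total.max()
    return total / peak if peak > 0 else total
```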
In summary, performing multi-scale LBP coding on the face image allows facial feature values to be extracted more accurately, further improving the accuracy of face recognition.
In the face recognition method provided in the embodiment of the present application, performing face recognition according to the first face feature vector to determine the identity information of the target object includes: calculating the Euclidean distance between the first face feature vector and a second face feature vector of the face image in the target database; and determining the identity information of the target object from the target database according to the Euclidean distance.
Specifically, the similarity between the face to be recognized and the faces in the data set is calculated using the Euclidean distance; that is, the Euclidean distance between the first face feature vector and the second face feature vector of each face image in the target database is calculated as described above. According to the given distance metric, the face in the face data set closest to the features of the face to be detected is found, and its category is taken as the category of the face to be recognized, thereby achieving face recognition.
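The nearest-neighbour matching step might look like the following sketch, where the `database` dictionary (identity → enrolled second face feature vector) is a hypothetical stand-in for the target database; a rejection threshold on the distance could be added so that unknown faces are not forced onto the nearest entry.

```python
import numpy as np

def identify(query, database):
    """Return the identity whose enrolled feature vector is nearest (in
    Euclidean distance) to `query`, together with that distance."""
    best_id, best_dist = None, float("inf")
    for identity, vec in database.items():
        # Euclidean distance between the query and the enrolled vector.
        dist = np.linalg.norm(np.asarray(query, dtype=float) - np.asarray(vec, dtype=float))
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id, best_dist
```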
In an alternative embodiment, the flow shown in fig. 7 may be used to implement face recognition. Specifically, a visible light image and an infrared image of the face are obtained from an ordinary camera and an infrared camera, respectively. The position of the key face region is acquired by exploiting the large difference between the face region of the infrared image and the surrounding environment, and the positions of key points such as the eyes and nose are obtained from the clearly distinguishable brightness of regions such as the eye corners and nose wings in the infrared image. Image standardization then defines a standard face: its image size is fixed, it is left-right symmetric, the line connecting the centers of the two eyes is parallel to the horizontal axis, and the centers of the nose and mouth fall on the perpendicular bisector of that line. According to the positions of the key points of the face to be processed, rotation and scaling are applied so that its orientation and size are consistent with the standard face. The normalized image is divided into 5 × 3 × 2 = 30 regions in accordance with the "three courts, five eyes" proportions, different scales are set for feature extraction according to the differing structure distributions and degrees of influence of the regions, and the multi-scale feature histogram of each region is obtained. These are connected into one large histogram, namely the feature histogram of the complete face. The similarity between the face to be recognized and the faces in the data set is then calculated using the Euclidean distance.
According to the given distance metric, the face in the face data set closest to the features of the face to be detected is found, and its category is taken as the category of the face to be recognized, thereby achieving face recognition.
According to the face recognition method provided by the embodiment of the present application, a visible light face image of a target object and an infrared face image of the target object are acquired, and the face and background of the visible light face image are separated based on the infrared face image to obtain a first face image; face key point recognition is performed on the first face image to obtain a second face image, the second face image being a face image meeting a preset standard; multi-scale LBP coding is performed on the second face image to obtain a first face feature vector corresponding to the second face image; and face recognition is performed according to the first face feature vector to determine the identity information of the target object. This solves the problem in the related art that face recognition methods are easily affected by the acquisition environment, resulting in low face recognition accuracy. With the infrared face image as an aid, the face and background of the visible light face image are distinguished, face key points are recognized to obtain the second face image, multi-scale LBP coding is then performed on the second face image, and face recognition is finally performed according to the first face feature vector, reducing the influence of the acquisition environment and improving the accuracy of face recognition.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that shown here.
The embodiment of the present application further provides a face recognition apparatus, and it should be noted that the face recognition apparatus of the embodiment of the present application may be used to execute the face recognition method provided by the embodiment of the present application. The following describes a face recognition apparatus provided in an embodiment of the present application.
Fig. 8 is a schematic diagram of a face recognition apparatus according to an embodiment of the present application. As shown in fig. 8, the apparatus includes: an acquisition unit 801, a first recognition unit 802, an encoding unit 803, and a second recognition unit 804.
An acquiring unit 801, configured to acquire a visible light face image of a target object and an infrared face image of the target object, and perform face and background separation on the visible light face image based on the infrared face image to obtain a first face image;
a first identification unit 802, configured to perform face key point identification on a first face image to obtain a second face image, where the second face image is a face image meeting a preset standard;
an encoding unit 803, configured to perform multi-scale LBP encoding on the second face image to obtain a first face feature vector corresponding to the second face image;
the second identifying unit 804 is configured to perform face identification according to the first face feature vector to determine identity information of the target object.
In the face recognition device provided by the embodiment of the application, the acquiring unit 801 is configured to acquire a visible light face image of a target object and an infrared face image of the target object, and to separate the face and background of the visible light face image based on the infrared face image to obtain a first face image; the first recognition unit 802 is configured to perform face key point recognition on the first face image to obtain a second face image, the second face image being a face image meeting a preset standard; the encoding unit 803 performs multi-scale LBP encoding on the second face image to obtain a first face feature vector corresponding to the second face image; and the second recognition unit 804 performs face recognition according to the first face feature vector to determine the identity information of the target object. This solves the problem in the related art that face recognition methods are easily affected by the acquisition environment, resulting in low face recognition accuracy: with the infrared face image as an aid, the face and background of the visible light face image are distinguished, face key points are recognized to obtain the second face image, multi-scale LBP coding is then performed on the second face image, and face recognition is finally performed according to the first face feature vector.
Optionally, in the face recognition apparatus provided in the embodiment of the present application, the first recognition unit 802 includes: a detection subunit, configured to perform face key point detection on the first face image through a corner detection algorithm to obtain position information of the face key points; and a first processing subunit, configured to process the first face image according to the position information of the face key points to obtain the second face image.
Optionally, in the face recognition apparatus provided in the embodiment of the present application, the first processing subunit includes: a rotation module, configured to rotate the first face image by an angle according to the position information of the face key points to obtain a processed first face image; and an adjusting module, configured to scale the processed first face image through bilinear interpolation to obtain the second face image.
Optionally, in the face recognition apparatus provided in the embodiment of the present application, the encoding unit includes: the second processing subunit is used for carrying out partition processing on the second face image to obtain a plurality of face image sub-areas; and the coding subunit is used for carrying out multi-scale LBP coding on each face image subregion so as to obtain a first face feature vector corresponding to the second face image.
Optionally, in the face recognition apparatus provided in the embodiment of the present application, the encoding subunit includes: the setting module is used for setting a plurality of preset scales according to the pixel points of each face image subregion, and performing multi-scale LBP feature extraction on the face image subregions according to the plurality of preset scales to obtain the multi-scale LBP feature of each face image subregion; the construction module is used for constructing a multi-scale LBP feature histogram of each face image subregion according to the multi-scale LBP feature of each face image subregion; and the first determining module is used for obtaining a first face feature vector corresponding to the second face image according to the multi-scale LBP feature histogram of each face image subregion.
Optionally, in the face recognition apparatus provided in the embodiment of the present application, the first determining module includes: the splicing submodule is used for splicing the multi-scale LBP characteristic histogram of each face image subregion to obtain a total LBP characteristic histogram; the determining submodule is used for obtaining an initial face feature vector corresponding to the second face image according to the total LBP feature histogram; and the processing submodule is used for carrying out normalization processing on the initial face feature vector to obtain a first face feature vector.
Optionally, in the face recognition apparatus provided in the embodiment of the present application, the second identifying unit 804 includes: the calculation module is used for calculating the Euclidean distance between the first face feature vector and a second face feature vector of the face image in the target database; and the second determining module is used for determining the identity information of the target object from the target database according to the Euclidean distance.
The face recognition device comprises a processor and a memory, wherein the acquisition unit 801, the first recognition unit 802, the coding unit 803, the second recognition unit 804 and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
In an alternative embodiment, the apparatus as shown in fig. 9 may be used to implement face recognition, and specifically, the apparatus includes an image obtaining unit, a key region and key point determining unit, an image normalization processing unit, a partition feature extracting unit, and a face recognition unit.
An image acquisition unit: obtains a visible light image and an infrared image of the face from an ordinary camera and an infrared camera, respectively.
A key area and key point determination unit: acquires the position of the key face region by exploiting the large difference between the face region of the infrared image and the surrounding environment, and obtains the positions of key points such as the eyes and nose from the clearly distinguishable brightness of regions such as the eye corners and nose wings in the infrared image.
An image normalization processing unit: defines a standard face whose image size is fixed and which is left-right symmetric, with the line connecting the centers of the two eyes parallel to the horizontal axis and the centers of the nose and mouth on the perpendicular bisector of that line. According to the positions of the key points of the face to be processed, rotation and scaling are applied so that its orientation and size are consistent with the standard face.
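The rotation angle and scale factor that bring a detected face into this standard pose can be derived from the two eye centres alone. The sketch below assumes a hypothetical standard inter-ocular distance of 60 pixels; the resulting warp would then be applied to the image with bilinear interpolation, as described.

```python
import numpy as np

def eye_alignment_params(left_eye, right_eye, standard_eye_dist=60.0):
    """From the two detected eye centres (x, y), compute the rotation angle
    (degrees) and scale factor that map the face onto the standard face:
    eye line horizontal, fixed inter-ocular distance.

    `standard_eye_dist` is an assumed constant of the standard face.
    """
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))  # tilt of the eye line
    dist = np.hypot(rx - lx, ry - ly)                 # current inter-ocular distance
    scale = standard_eye_dist / dist
    return angle, scale
```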
A partition feature extraction unit: divides the normalized image into 5 × 3 × 2 = 30 regions in accordance with the "three courts, five eyes" proportions, sets different scales for feature extraction according to the differing structure distributions and degrees of influence of the regions, and obtains the multi-scale feature histogram of each region. These are connected into one large histogram, namely the feature histogram of the complete face.
A face recognition unit: calculates the similarity between the face to be recognized and the faces in the data set using the Euclidean distance. According to the given distance metric, the face in the face data set closest to the features of the face to be detected is found, and its category is taken as the category of the face to be recognized, thereby achieving face recognition.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided, and face recognition is realized by adjusting kernel parameters.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
The embodiment of the invention provides a processor, which is used for running a program, wherein a face recognition method is executed when the program runs.
As shown in fig. 10, an embodiment of the present invention provides an electronic device, where the device includes a processor, a memory, and a program stored in the memory and executable on the processor, and the processor implements the following steps when executing the program: acquiring a visible light face image of a target object and an infrared face image of the target object, and performing face and background separation on the visible light face image based on the infrared face image to obtain a first face image; performing face key point identification on the first face image to obtain a second face image, wherein the second face image is a face image meeting a preset standard; carrying out multi-scale LBP coding on the second face image to obtain a first face feature vector corresponding to the second face image; and carrying out face recognition according to the first face feature vector to determine the identity information of the target object.
Optionally, performing face key point recognition on the first face image to obtain a second face image includes: detecting face key points of the first face image through a corner detection algorithm to obtain position information of the face key points; and processing the first face image according to the position information of the face key points to obtain the second face image.
Optionally, processing the first face image according to the position information of the face key points to obtain the second face image includes: rotating the first face image by an angle according to the position information of the face key points to obtain a processed first face image; and scaling the processed first face image through bilinear interpolation to obtain the second face image.
Optionally, performing multi-scale LBP coding on the second facial image to obtain a first facial feature vector corresponding to the second facial image includes: partitioning the second face image to obtain a plurality of face image sub-areas; and carrying out multi-scale LBP coding on each face image subregion to obtain a first face feature vector corresponding to the second face image.
Optionally, performing multi-scale LBP coding on each face image subregion to obtain a first face feature vector corresponding to the second face image includes: setting a plurality of preset scales according to the pixel points of each face image subregion, and performing multi-scale LBP feature extraction on the face image subregions according to the plurality of preset scales to obtain the multi-scale LBP feature of each face image subregion; constructing a multi-scale LBP characteristic histogram of each face image subregion according to the multi-scale LBP characteristic of each face image subregion; and obtaining a first face feature vector corresponding to the second face image according to the multi-scale LBP feature histogram of each face image subregion.
Optionally, obtaining a first face feature vector corresponding to the second face image according to the multi-scale LBP feature histogram includes: splicing the multi-scale LBP characteristic histogram of each face image subregion to obtain a total LBP characteristic histogram; obtaining an initial face feature vector corresponding to the second face image according to the total LBP feature histogram; and carrying out normalization processing on the initial face feature vector to obtain a first face feature vector.
Optionally, performing face recognition according to the first face feature vector to determine the identity information of the target object includes: calculating the Euclidean distance between the first face feature vector and a second face feature vector of the face image in the target database; and determining the identity information of the target object from the target database according to the Euclidean distance.
The device herein may be a server, a PC, a PAD, a mobile phone, etc.
The present application also provides a computer program product adapted to perform a program for initializing the following method steps when executed on a data processing device: acquiring a visible light face image of a target object and an infrared face image of the target object, and performing face and background separation on the visible light face image based on the infrared face image to obtain a first face image; performing face key point identification on the first face image to obtain a second face image, wherein the second face image is a face image meeting a preset standard; carrying out multi-scale LBP coding on the second face image to obtain a first face feature vector corresponding to the second face image; and performing face recognition according to the first face feature vector to determine the identity information of the target object.
Optionally, performing face key point recognition on the first face image to obtain a second face image includes: detecting face key points of the first face image through a corner detection algorithm to obtain position information of the face key points; and processing the first face image according to the position information of the face key points to obtain the second face image.
Optionally, processing the first face image according to the position information of the face key points to obtain the second face image includes: rotating the first face image by an angle according to the position information of the face key points to obtain a processed first face image; and scaling the processed first face image through bilinear interpolation to obtain the second face image.
Optionally, performing multi-scale LBP coding on the second facial image to obtain a first facial feature vector corresponding to the second facial image includes: partitioning the second face image to obtain a plurality of face image sub-areas; and carrying out multi-scale LBP coding on each face image subregion to obtain a first face feature vector corresponding to the second face image.
Optionally, performing multi-scale LBP coding on each face image subregion to obtain a first face feature vector corresponding to the second face image includes: setting a plurality of preset scales according to the pixel points of each face image subregion, and performing multi-scale LBP feature extraction on the face image subregions according to the plurality of preset scales to obtain the multi-scale LBP feature of each face image subregion; constructing a multi-scale LBP characteristic histogram of each face image subregion according to the multi-scale LBP characteristic of each face image subregion; and obtaining a first face feature vector corresponding to the second face image according to the multi-scale LBP feature histogram of each face image subregion.
Optionally, obtaining a first face feature vector corresponding to the second face image according to the multi-scale LBP feature histogram includes: splicing the multi-scale LBP characteristic histogram of each face image subregion to obtain a total LBP characteristic histogram; obtaining an initial face feature vector corresponding to the second face image according to the total LBP feature histogram; and carrying out normalization processing on the initial face feature vector to obtain a first face feature vector.
Optionally, the performing face recognition according to the first face feature vector to determine the identity information of the target object includes: calculating the Euclidean distance between the first face feature vector and a second face feature vector of the face image in the target database; and determining the identity information of the target object from the target database according to the Euclidean distance.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A face recognition method, comprising:
acquiring a visible light face image of a target object and an infrared face image of the target object, and performing face and background separation on the visible light face image based on the infrared face image to obtain a first face image;
performing face key point identification on the first face image to obtain a second face image, wherein the second face image is a face image meeting a preset standard;
carrying out multi-scale LBP coding on the second face image to obtain a first face feature vector corresponding to the second face image;
and carrying out face recognition according to the first face feature vector to determine the identity information of the target object.
2. The method of claim 1, wherein performing face keypoint recognition on the first face image to obtain a second face image comprises:
detecting face key points of the first face image through a corner detection algorithm to obtain position information of the face key points;
and processing the first face image according to the position information of the key points of the face to obtain a second face image.
3. The method according to claim 2, wherein processing the first face image according to the position information of the face key points to obtain the second face image comprises:
performing angle rotation on the first face image according to the position information of the key points of the face to obtain a processed first face image;
and carrying out scaling adjustment on the processed first face image through bilinear interpolation to obtain a second face image.
4. The method of claim 1, wherein performing multi-scale LBP coding on the second facial image to obtain a first facial feature vector corresponding to the second facial image comprises:
partitioning the second face image to obtain a plurality of face image subregions;
and performing multi-scale LBP coding on each face image subregion to obtain a first face feature vector corresponding to the second face image.
5. The method of claim 4, wherein performing multi-scale LBP coding on each sub-region of the face image to obtain the first face feature vector corresponding to the second face image comprises:
setting a plurality of preset scales according to the pixel points of each face image subregion, and performing multi-scale LBP feature extraction on each face image subregion according to the preset scales to obtain the multi-scale LBP feature of each face image subregion;
constructing a multi-scale LBP characteristic histogram of each face image subregion according to the multi-scale LBP characteristic of each face image subregion;
and obtaining a first face feature vector corresponding to the second face image according to the multi-scale LBP feature histogram of each face image subregion.
6. The method according to claim 5, wherein obtaining the first face feature vector corresponding to the second face image according to the multi-scale LBP feature histogram comprises:
concatenating the multi-scale LBP feature histograms of the face image subregions to obtain a total LBP feature histogram;
obtaining an initial face feature vector corresponding to the second face image according to the total LBP feature histogram;
and normalizing the initial face feature vector to obtain the first face feature vector.
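The pipeline of claims 4 through 6 (partition, per-scale LBP, histogram, concatenation, normalization) can be sketched as follows; the grid size, radii, and L2 normalization are illustrative assumptions, since the patent leaves the preset scales unspecified:

```python
import numpy as np

def lbp_code(img, r, y, x):
    """8-neighbour LBP code of pixel (y, x) sampled at radius r."""
    center = img[y, x]
    code = 0
    for k, (dy, dx) in enumerate([(-r, -r), (-r, 0), (-r, r), (0, r),
                                  (r, r), (r, 0), (r, -r), (0, -r)]):
        if img[y + dy, x + dx] >= center:
            code |= 1 << k
    return code

def multiscale_lbp_vector(face, grid=2, radii=(1, 2)):
    """Partitioned multi-scale LBP histogram vector for a grayscale face."""
    h, w = face.shape
    bh, bw = h // grid, w // grid
    hists = []
    for by in range(grid):
        for bx in range(grid):
            block = face[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            for r in radii:  # one 256-bin histogram per subregion per scale
                hist = np.zeros(256)
                for y in range(r, block.shape[0] - r):
                    for x in range(r, block.shape[1] - r):
                        hist[lbp_code(block, r, y, x)] += 1
                hists.append(hist)
    vec = np.concatenate(hists)      # total LBP feature histogram
    n = np.linalg.norm(vec)
    return vec / n if n else vec     # normalized face feature vector
```

With a 2x2 grid and two radii this yields a 2048-dimensional vector (4 subregions x 2 scales x 256 bins), which serves as the first face feature vector of claim 1.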
7. The method of claim 1, wherein performing face recognition according to the first face feature vector to determine the identity information of the target object comprises:
calculating the Euclidean distance between the first face feature vector and a second face feature vector of a face image in a target database;
and determining the identity information of the target object from the target database according to the Euclidean distance.
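Claim 7's matching step reduces to a nearest-neighbour search under Euclidean distance; a minimal sketch, where the acceptance threshold is an illustrative assumption (the patent does not state one):

```python
import numpy as np

def identify(query_vec, database, threshold=0.8):
    """Return the identity whose stored feature vector is nearest to the query.

    `database` maps identity -> feature vector. A match is reported only
    when the smallest Euclidean distance falls below `threshold`; otherwise
    (None, distance) signals an unknown face.
    """
    best_id, best_dist = None, float("inf")
    for identity, vec in database.items():
        dist = float(np.linalg.norm(query_vec - np.asarray(vec)))
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return (best_id, best_dist) if best_dist < threshold else (None, best_dist)
```

Because the feature vectors are normalized, distances are bounded and a single global threshold is meaningful across the whole target database.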
8. A face recognition apparatus, comprising:
the acquisition unit is used for acquiring a visible light face image of a target object and an infrared face image of the target object, and performing face and background separation on the visible light face image based on the infrared face image to obtain a first face image;
the first identification unit is used for carrying out face key point identification on the first face image to obtain a second face image, wherein the second face image is a face image meeting a preset standard;
the encoding unit is used for carrying out multi-scale LBP encoding on the second face image to obtain a first face feature vector corresponding to the second face image;
and the second identification unit is used for carrying out face identification according to the first face characteristic vector so as to determine the identity information of the target object.
9. A processor, characterized in that the processor is configured to run a program, wherein the program is configured to execute the face recognition method according to any one of claims 1 to 7 when running.
10. An electronic device, comprising one or more processors and memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the face recognition method of any of claims 1-7.
CN202211362522.7A 2022-11-02 2022-11-02 Face recognition method and device, processor and electronic equipment Pending CN115601817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211362522.7A CN115601817A (en) 2022-11-02 2022-11-02 Face recognition method and device, processor and electronic equipment

Publications (1)

Publication Number Publication Date
CN115601817A true CN115601817A (en) 2023-01-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination