CN112818938B - Face recognition algorithm and face recognition device for self-adaptive illumination interference environment - Google Patents

Face recognition algorithm and face recognition device for self-adaptive illumination interference environment

Info

Publication number
CN112818938B
CN112818938B (application CN202110233058.0A)
Authority
CN
China
Prior art keywords
face
lbp
training
face recognition
dis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110233058.0A
Other languages
Chinese (zh)
Other versions
CN112818938A (en)
Inventor
杨在野
葛微
范彩霞
张政
詹伟达
郝子强
唐雁峰
嵇晓强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN202110233058.0A priority Critical patent/CN112818938B/en
Publication of CN112818938A publication Critical patent/CN112818938A/en
Application granted granted Critical
Publication of CN112818938B publication Critical patent/CN112818938B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V 40/161 — Human faces: detection; localisation; normalisation
    • G06F 18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 — Pattern recognition: matching criteria, e.g. proximity measures
    • G06F 18/24 — Pattern recognition: classification techniques
    • G06N 3/045 — Neural networks: combinations of networks
    • G06V 10/40 — Extraction of image or video features
    • G06V 40/171 — Human faces: local features and components; facial parts; occluding parts, e.g. glasses
    • G06V 40/172 — Human faces: classification, e.g. identification
    • G06V 10/467 — Encoded features or binary features, e.g. local binary patterns [LBP]

Abstract

The invention relates to a face recognition algorithm and a face recognition device for a self-adaptive illumination interference environment. The face recognition algorithm comprises a step of establishing a face data vector set and a face recognition step. In the face recognition step, MTCNN is used to detect the face in a portrait photo to be recognized; the resulting face picture is input into a FaceNet model and an LBP model respectively to obtain corresponding vectors; the Euclidean distance between each of these vectors and each vector in the corresponding face data vector set is calculated; a weighted sum of the Euclidean distances is then computed, and the training portrait photo corresponding to the minimum weighted sum is taken as the face recognition result for the portrait photo to be recognized. The invention adaptively adjusts the weights of the two algorithms to adapt to ambient illumination, thereby improving the robustness of the neural network to illumination in face recognition and the face recognition rate in illumination interference environments.

Description

Face recognition algorithm and face recognition device for self-adaptive illumination interference environment
Technical Field
The invention relates to the technical field of face recognition, in particular to a face recognition algorithm and a face recognition device for a self-adaptive illumination interference environment.
Background
Face recognition technology has made great progress over the past decades. Face recognition techniques can be divided into conventional methods (LDA, PCA, LBP, Gabor filtering, etc.) and deep learning methods (MobileNet, FaceNet, etc.). Conventional methods are faster, while deep learning methods are more accurate. Face recognition has been applied in many fields, such as human-computer interaction, video monitoring and camera beautification.
Because neither the conventional methods nor the deep learning methods of face recognition can adapt to ambient illumination, in some special cases, such as faces that are too strongly or too weakly lit, the accuracy of face recognition still has considerable room for improvement.
Disclosure of Invention
In order to solve the problems that existing face recognition technology adapts poorly to illumination interference environments and has a low recognition rate, a face recognition algorithm and a face recognition device for a self-adaptive illumination interference environment are provided, which achieve a better recognition rate under complex illumination conditions.
In order to solve the problems, the invention adopts the following technical scheme:
a face recognition algorithm of a self-adaptive illumination interference environment comprises the following steps:
step one: establishing a face data vector set
Inputting all training portrait photos in a portrait photo training set into an MTCNN, and detecting the face position and the face key point of each input training portrait photo by using a face feature classifier to obtain corresponding training face detection frame coordinates;
intercepting the corresponding training portrait photos according to the training face detection frame coordinates to obtain corresponding training face pictures;
inputting all the training face pictures into a FaceNet model and an LBP model respectively to obtain a corresponding face data vector set LIB_fn and face data vector set LIB_lbp;
Step two: face recognition
Inputting a to-be-identified portrait photo into the MTCNN, and detecting a face position and a face key point of the input to-be-identified portrait photo by using a face feature classifier to obtain a to-be-identified face detection frame coordinate and a key point coordinate;
intercepting the face photo to be recognized according to the face detection frame coordinates to be recognized to obtain a face picture to be recognized;
inputting the face picture to be recognized into the FaceNet model and the LBP model respectively to obtain the corresponding vectors V_fn and V_lbp;

respectively calculating the Euclidean distance dis_FN between the vector V_fn and each vector in the face data vector set LIB_fn, and the Euclidean distance dis_LBP between the vector V_lbp and each vector in the face data vector set LIB_lbp; then, for each training portrait photo, carrying out a weighted summation of the corresponding Euclidean distances dis_FN and dis_LBP, with the calculation formula:

dis = α_FN × dis_FN + α_LBP × dis_LBP

wherein α_FN is the weight of the Euclidean distance dis_FN, α_LBP is the weight of the Euclidean distance dis_LBP, 0 ≤ α_FN ≤ 1, 0 ≤ α_LBP ≤ 1, and α_FN + α_LBP = 1;
And taking the training portrait photo corresponding to the minimum dis value as a face recognition result of the portrait photo to be recognized.
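As an illustration, the weighted distance fusion and minimum-dis decision of step two can be sketched in Python. The library layout (identity → vector dictionaries) and the default α values are assumptions for the sketch; the patent determines the weights adaptively from face brightness.

```python
import math


def recognize(v_fn, v_lbp, lib_fn, lib_lbp, alpha_fn=0.78, alpha_lbp=0.22):
    """Return (identity, dis) minimising dis = alpha_fn*dis_FN + alpha_lbp*dis_LBP.

    lib_fn / lib_lbp map each training identity to its FaceNet / LBP vector;
    the alpha defaults are illustrative placeholders (alpha_fn + alpha_lbp = 1).
    """
    best_id, best_dis = None, float("inf")
    for ident in lib_fn:
        dis_fn = math.dist(v_fn, lib_fn[ident])     # Euclidean distance dis_FN
        dis_lbp = math.dist(v_lbp, lib_lbp[ident])  # Euclidean distance dis_LBP
        dis = alpha_fn * dis_fn + alpha_lbp * dis_lbp
        if dis < best_dis:
            best_id, best_dis = ident, dis
    return best_id, best_dis
```

`math.dist` (Python 3.8+) computes the Euclidean distance between two same-length vectors, so the sketch works unchanged for the 128-dimensional vectors used by the patent.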
Correspondingly, the invention also provides a face recognition device for self-adapting to the illumination interference environment, which comprises:
the training face detection module is used for inputting all training face photos in the face photo training set into the MTCNN, and the MTCNN detects face positions and face key points of each input training face photo by using a face feature classifier to obtain corresponding training face detection frame coordinates;
the training face picture intercepting module is used for intercepting the corresponding training portrait photos according to the training face detection frame coordinates to obtain corresponding training face pictures;
the data set building module is used for respectively inputting all the training face pictures into a FaceNet model and an LBP model to obtain a corresponding face data vector set LIB_fn and face data vector set LIB_lbp;
the to-be-identified face detection module is used for inputting a portrait photo to be identified into the MTCNN; the MTCNN uses a face feature classifier to detect the face position and face key points of the input portrait photo to be identified, obtaining to-be-identified face detection frame coordinates and key point coordinates;
the face picture intercepting module is used for intercepting the face picture to be recognized according to the face detection frame coordinates to be recognized to obtain a face picture to be recognized;
the vector module is used for respectively inputting the face picture to be recognized into a FaceNet model and an LBP model to obtain corresponding vectors
Figure GDA0004211790220000041
Sum vector->
Figure GDA0004211790220000042
Calculation modules for calculating vectors respectively
Figure GDA0004211790220000043
LIB with face data vector set fn Euclidean distance dis of each vector in (a) FN Vector->
Figure GDA0004211790220000044
LIB with face data vector set lbp Euclidean distance dis of each vector in (a) LBP Then, for each training portrait photo, the corresponding Euclidean distance dis FN And European distance dis LBP And (4) carrying out weighted summation calculation, wherein the calculation formula is as follows:
dis=α FN ×dis FNLBP ×dis LBP
wherein alpha is FN Is European distance dis FN Weights, alpha LBP Is European distance dis LBP And 0+.alpha ∈α) FN ≦1,0≦α LBP ≦1,α FNLBP =1;
And the recognition result module is used for taking the training portrait photo corresponding to the minimum dis value as the face recognition result of the portrait photo to be recognized.
Compared with the prior art, the invention has the following beneficial effects:
the face recognition algorithm and the face recognition device for the self-adaptive illumination interference environment provided by the invention utilize the LBP algorithm to be connected with the FaceNet algorithm in parallel to complete the establishment of a face data vector set and face recognition, and the weights of the two algorithms are self-adaptively adjusted in the face recognition process, so that the face recognition algorithm and the face recognition device are self-adaptive to the environment illumination, the robustness of the neural network to the illumination in the face recognition process is improved, and the face recognition rate in the illumination interference environment is improved.
Drawings
FIG. 1 is a flowchart of a face recognition algorithm for an adaptive illumination interference environment in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the relationship between the brightness L and the weight α_LBP in the embodiment of the invention.
Detailed Description
The neural network has good precision in face recognition application. LBP is an image processing method that has great advantages in reducing the effects of illumination. The invention utilizes LBP algorithm to connect neural network algorithm in parallel, weights the two results to improve the robustness of the neural network to illumination in face recognition. The technical scheme of the invention will be described in detail below with reference to the accompanying drawings and preferred embodiments.
In one embodiment, as shown in fig. 1, the present invention provides a face recognition algorithm for an adaptive illumination interference environment, which specifically includes the following steps:
step one: establishing a face data vector set
The face detection algorithm in this embodiment can be a Multi-task Cascaded Convolutional Network (MTCNN) or OpenCV; both can detect the face position and face key points using a built-in face feature classifier.
Inputting all training portrait photos in the portrait photo training set into the MTCNN, and detecting the face position and the face key point of each input training portrait photo by the MTCNN through a face feature classifier to obtain corresponding training face detection frame coordinates. The MTCNN comprises three neural network structures, namely P-Net, R-Net and O-Net, wherein the P-Net acquires a square frame containing a human face, and removes redundant frames through a non-maximum suppression algorithm, so that a plurality of human face detection candidate frames are preliminarily obtained; R-Net further refines the coordinates of the face detection frame, redundant frames are removed through NMS algorithm, and the face detection frame obtained at the moment is more accurate and has fewer redundant frames; on one hand, the O-Net further refines the coordinates of the face detection frame, and on the other hand, the coordinates of 5 key points (left eye, right eye, nose, left mouth angle and right mouth angle) of the face are output.
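The non-maximum suppression (NMS) step that P-Net and R-Net use to remove redundant detection frames can be sketched as follows. This is the standard greedy IoU-based NMS; the 0.5 overlap threshold and the `(x1, y1, x2, y2)` box format are illustrative assumptions, not values taken from the patent.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep
```

Each cascade stage applies this pruning so that only one frame per face survives to the next, more accurate network.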
When all training portrait photos in the portrait photo training set are input into the OpenCV, the OpenCV also has library functions for face detection and facial organ recognition, so as to obtain training face detection frame coordinates corresponding to each training portrait photo.
After the training face detection frame coordinates are obtained, corresponding training face photos are intercepted according to the training face detection frame coordinates, and corresponding training face pictures are obtained.
After each training portrait photo is intercepted, all training face pictures corresponding to all portrait photos are obtained, and then all training face pictures are respectively input into a FaceNet model and an LBP model. The FaceNet model in the embodiment can also be replaced by other face recognition neural network algorithms, that is, the LBP algorithm can also be combined with other face recognition neural network algorithms to achieve face recognition under the illumination interference environment.
The FaceNet model is a neural network structure that converts a training face picture into a 128-dimensional vector; therefore, after all training face pictures are input into the FaceNet model, a face data vector set LIB_fn is obtained, in which every vector is 128-dimensional.
The LBP model equally divides the training face picture into 7 × 4 regions, and a histogram of length 59 is computed for each region. The histograms of all 28 regions are concatenated into one histogram of length 1652, which can be represented by a 1652-dimensional vector; this vector is reduced to 128 dimensions by the LDA algorithm. Therefore, after all training face pictures are input into the LBP model, a face data vector set LIB_lbp is obtained, in which every vector is likewise 128-dimensional.
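A minimal sketch of the LBP feature described above: a 7 × 4 region grid with a 59-bin histogram per region, concatenated to length 1652. The 59 bins follow the standard uniform-LBP construction (58 uniform 8-bit patterns plus one catch-all bin for non-uniform codes); the grid orientation, neighbour ordering and threshold convention are illustrative assumptions, and the LDA reduction to 128 dimensions is omitted.

```python
def lbp_histogram(img, grid=(7, 4), bins=59):
    """Concatenated uniform-LBP histograms over a grid of image regions.

    img is a 2-D list of grey values; returns a flat histogram of length
    grid[0] * grid[1] * bins (= 1652 for a 7x4 grid with 59 bins).
    """
    # Uniform-pattern lookup: the 58 patterns with <= 2 circular 0/1
    # transitions get indices 0..57, every other pattern maps to bin 58.
    def transitions(p):
        b = [(p >> i) & 1 for i in range(8)]
        return sum(b[i] != b[(i + 1) % 8] for i in range(8))

    uniform, nxt = {}, 0
    for p in range(256):
        if transitions(p) <= 2:
            uniform[p] = nxt
            nxt += 1
    lut = {p: uniform.get(p, bins - 1) for p in range(256)}

    h, w = len(img), len(img[0])
    gr, gc = grid
    hist = [0] * (gr * gc * bins)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, h - 1):          # skip the 1-pixel border
        for c in range(1, w - 1):
            center, code = img[r][c], 0
            for k, (dr, dc) in enumerate(offs):
                if img[r + dr][c + dc] >= center:
                    code |= 1 << k     # 8-bit LBP code of this pixel
            region = (r * gr // h) * gc + (c * gc // w)
            hist[region * bins + lut[code]] += 1
    return hist
```

In the patent's pipeline the resulting 1652-dimensional vector would then be projected to 128 dimensions with LDA before being stored in LIB_lbp.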
Step two: face recognition
Inputting the to-be-identified portrait photo into an MTCNN, and detecting a face position and a face key point of the input to-be-identified portrait photo by using a face feature classifier to obtain to-be-identified face detection frame coordinates and key point coordinates, wherein the key point coordinates comprise a left eye coordinate, a right eye coordinate, a left mouth corner coordinate and a right mouth corner coordinate.
After the face detection frame coordinates to be recognized are obtained, the face picture to be recognized is obtained by intercepting the face picture to be recognized according to the face detection frame coordinates to be recognized.
The face picture to be identified is likewise input into the FaceNet model and the LBP model respectively to obtain the corresponding vectors V_fn and V_lbp, both of which are 128-dimensional.
Next, the Euclidean distance dis_FN between the vector V_fn and each vector in the face data vector set LIB_fn, and the Euclidean distance dis_LBP between the vector V_lbp and each vector in the face data vector set LIB_lbp, are calculated as follows:

dis_FN = √( Σ_{i=1}^{128} ( V_fn(i) − U_fn(i) )² )

dis_LBP = √( Σ_{i=1}^{128} ( V_lbp(i) − U_lbp(i) )² )

wherein U_fn denotes a vector in the face data vector set LIB_fn and U_lbp denotes a vector in the face data vector set LIB_lbp.
Then, for each training portrait photo, a weighted summation of the corresponding Euclidean distances dis_FN and dis_LBP is carried out, with the calculation formula:

dis = α_FN × dis_FN + α_LBP × dis_LBP

wherein α_FN is the weight of dis_FN, α_LBP is the weight of dis_LBP, and α_FN + α_LBP = 1.
The smaller the dis value between two face photos, the larger the probability that the two faces come from the same person; therefore, the training portrait photo corresponding to the minimum dis value can be taken as the face recognition result for the portrait photo to be recognized.
Experiments show that adding the LBP model with a certain weight noticeably improves face recognition accuracy in illumination interference environments, while in environments without illumination interference it brings no improvement and can even compromise the recognition accuracy of FaceNet. When the model is tested on a fixed data set, the weights α_FN and α_LBP giving the highest recognition accuracy can be found through extensive experiments; however, when the algorithm is applied in daily life, it is difficult to find such a set of optimal values because the environment changes. Therefore, the invention provides a method for automatically adjusting the weights α_FN and α_LBP according to the brightness of the face.
On this basis, in order to remove the influence of hair, the invention intercepts only the central area of the face, using the eye and mouth-corner coordinates, and calculates the brightness of this central area. Brightness is the eye's perception of the lightness of a light source or an object's surface, a visual experience determined mainly by light intensity; the degree to which ambient illumination interferes with face recognition can therefore be judged from the brightness. The invention adaptively adjusts the weights of the two algorithms, namely the FaceNet model and the LBP model, according to the brightness; specifically, the values of the weights α_FN and α_LBP are determined as follows:
The central area of the face picture to be identified is intercepted according to the four key point coordinates: the left eye, right eye, left mouth corner and right mouth corner coordinates. After the central area is intercepted, its brightness L is calculated from the red, green and blue grey values R, G and B of its pixels; L ranges from 0 to 1, where L = 0 represents black and L = 1 represents white. The weight α_LBP is adjusted according to the brightness L as follows: when the brightness L of the central area is too high or too low, the value of the weight α_LBP is increased; when the brightness L of the central area is moderate, the weight α_LBP is reduced, or even set to zero. The thresholds on the brightness L and the corresponding values of the weight α_LBP may be set according to the actual situation; for example, when the brightness L of the central area is 0.7 or more, or 0.3 or less, the weight α_LBP takes a value of 0.22 or more, and when L is greater than 0.3 and less than 0.7, the weight α_LBP takes a value less than 0.22. This embodiment provides a correspondence between the brightness L and the weight α_LBP, shown in Table 1; FIG. 2 is a schematic diagram of the relationship between L and α_LBP corresponding to Table 1.
TABLE 1: correspondence between the brightness L of the central area and the weight α_LBP (the table values are reproduced only as images in the original document).
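The brightness-to-weight schedule described above can be sketched as follows. The 0.3/0.7 thresholds and the 0.22 boundary value come from the text; since the exact values of Table 1 are not reproduced in this copy, the linear ramps toward the extremes are an illustrative assumption.

```python
def alpha_lbp(L):
    """Adaptive LBP weight from the central-area brightness L in [0, 1].

    Thresholds 0.3 / 0.7 and the 0.22 boundary are from the text; the
    ramps toward 0.40 at L = 0 and L = 1 are illustrative placeholders.
    """
    if L <= 0.3:                                    # too dark: trust LBP more
        return 0.22 + (0.3 - L) / 0.3 * 0.18
    if L >= 0.7:                                    # too bright: trust LBP more
        return 0.22 + (L - 0.7) / 0.3 * 0.18
    return 0.0                                      # moderate light: FaceNet only
```

With α_LBP fixed, α_FN follows as 1 − α_LBP, preserving the constraint α_FN + α_LBP = 1.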
After the face recognition algorithm provided by this embodiment was applied to a face recognition system, experimental results showed that when the face of the person to be recognized is irradiated by strong light, the recognition rate of the algorithm is about 5.6% higher than that of the MTCNN + FaceNet algorithm; when the person to be recognized is in a dark environment, the recognition rate is about 4.2% higher. These results show that the face recognition algorithm of this embodiment clearly improves face recognition in illumination interference environments.
The face recognition algorithm of the self-adaptive illumination interference environment provided by the invention utilizes the LBP algorithm and the FaceNet algorithm to complete the establishment of a face data vector set and face recognition, and the weights of the two algorithms are self-adaptively adjusted in the face recognition process, so that the self-adaptive environment illumination of the face recognition algorithm is realized, the illumination robustness of a neural network in face recognition is improved, and the face recognition rate in the illumination interference environment is improved.
In another embodiment, the present invention provides a face recognition device for adapting to an illumination interference environment, which specifically includes:
the training face detection module is configured to input all training face photos in the face detection algorithm to perform face detection after the face photo training set is pre-established, where the face detection algorithm in this embodiment may be based on a multitask convolutional network (Multi-task Cascaded Convolutional Networks, MTCNN) or OpenCV, and the MTCNN or OpenCV uses a self-contained face feature classifier to implement face position and face key point detection.
The training face detection module inputs all training face photos in the face photo training set into the MTCNN, and the MTCNN detects face positions and face key points of each input training face photo by using a face feature classifier to obtain corresponding training face detection frame coordinates. The MTCNN comprises three neural network structures, namely P-Net, R-Net and O-Net, wherein the P-Net acquires a square frame containing a human face, and removes redundant frames through a non-maximum suppression algorithm, so that a plurality of human face detection candidate frames are preliminarily obtained; R-Net further refines the coordinates of the face detection frame, redundant frames are removed through NMS algorithm, and the face detection frame obtained at the moment is more accurate and has fewer redundant frames; on one hand, the O-Net further refines the coordinates of the face detection frame, and on the other hand, the coordinates of 5 key points (left eye, right eye, nose, left mouth angle and right mouth angle) of the face are output.
When the training face detection module inputs all training face photos in the face photo training set into the OpenCV, the OpenCV also has a library function for face detection and face organ recognition, so as to obtain training face detection frame coordinates corresponding to each training face photo.
After the training face detection module obtains the training face detection frame coordinates, the training face picture intercepting module intercepts corresponding training face pictures according to the training face detection frame coordinates to obtain corresponding training face pictures.
After the training face picture intercepting module has intercepted each training portrait photo, all the training face pictures corresponding to all the portrait photos are obtained, and the data set building module then inputs all the training face pictures into a FaceNet model and an LBP model respectively. The FaceNet model in this embodiment can also be replaced by another face recognition neural network; that is, the LBP algorithm can also be combined with other face recognition neural networks to achieve face recognition in illumination interference environments.
The FaceNet model is a neural network structure that converts a training face picture into a 128-dimensional vector; therefore, after the data set building module inputs all training face pictures into the FaceNet model, a face data vector set LIB_fn is obtained, in which every vector is 128-dimensional.
The LBP model equally divides the training face picture into 7 × 4 regions, and a histogram of length 59 is computed for each region. The histograms of all 28 regions are concatenated into one histogram of length 1652, which can be represented by a 1652-dimensional vector; this vector is reduced to 128 dimensions by the LDA algorithm. Therefore, after the data set building module inputs all training face pictures into the LBP model, a face data vector set LIB_lbp is obtained, in which every vector is likewise 128-dimensional.
The face detection module to be identified inputs the face photo to be identified into the MTCNN, and the MTCNN detects the face position and the face key point of the input face photo to be identified by utilizing the face feature classifier to obtain face detection frame coordinates and key point coordinates to be identified, wherein the key point coordinates comprise left eye coordinates, right eye coordinates, left mouth corner coordinates and right mouth corner coordinates.
After the face detection module to be identified obtains the face detection frame coordinates to be identified, the face picture interception module to be identified intercepts the face picture to be identified according to the face detection frame coordinates to be identified, and obtains the face picture to be identified.
The vector module inputs the face picture to be recognized into the FaceNet model and the LBP model respectively to obtain the corresponding vectors V_fn and V_lbp, both of which are 128-dimensional.
Next, the calculation module respectively calculates the Euclidean distance dis_FN between the vector V_fn and each vector in the face data vector set LIB_fn, and the Euclidean distance dis_LBP between the vector V_lbp and each vector in the face data vector set LIB_lbp, as follows:

dis_FN = √( Σ_{i=1}^{128} ( V_fn(i) − U_fn(i) )² )

dis_LBP = √( Σ_{i=1}^{128} ( V_lbp(i) − U_lbp(i) )² )

wherein U_fn denotes a vector in the face data vector set LIB_fn and U_lbp denotes a vector in the face data vector set LIB_lbp.
The calculation module then carries out, for each training portrait photo, a weighted summation of the corresponding Euclidean distances dis_FN and dis_LBP, with the calculation formula:

dis = α_FN × dis_FN + α_LBP × dis_LBP

wherein α_FN is the weight of dis_FN, α_LBP is the weight of dis_LBP, and α_FN + α_LBP = 1.
The smaller the dis value between two face photos, the larger the probability that the two faces come from the same person; therefore, the recognition result module takes the training portrait photo corresponding to the minimum dis value as the face recognition result for the portrait photo to be recognized.
Experiments show that adding the LBP model with a certain weight noticeably improves face recognition accuracy in illumination interference environments, while in environments without illumination interference it brings no improvement and can even compromise the recognition accuracy of FaceNet. When the model is tested on a fixed data set, the weights α_FN and α_LBP giving the highest recognition accuracy can be found through extensive experiments; however, when the algorithm is applied in daily life, it is difficult to find such a set of optimal values because the environment changes. Therefore, the invention provides a method for automatically adjusting the weights α_FN and α_LBP according to the brightness of the face.
On this basis, in order to remove the influence of the hair region, the invention intercepts only the central area of the face, using the eye and mouth-corner coordinates, and calculates the brightness of this central area. Brightness is the eye's perception of how light or dark a light source or object surface is, determined mainly by light intensity, and the degree to which ambient illumination interferes with face recognition can be judged from it. The invention adaptively adjusts the weights of the two algorithms, the FaceNet model and the LBP model, according to brightness. Specifically, the weight determining module determines the values of the weights α_FN and α_LBP as follows: the weight determining module intercepts a central area of the face picture to be identified according to the four key-point coordinates, namely the left eye coordinate, the right eye coordinate, the left mouth corner coordinate and the right mouth corner coordinate. After the central area is intercepted, the weight determining module calculates the brightness L of the central area, the calculation formula of the brightness L being as follows:
[brightness formula for L — rendered only as an image in the original; L is computed from the R, G and B gray levels of each pixel and lies in the range 0–1]
wherein R, G and B are the gray levels of the red, green and blue components of each pixel; L ranges from 0 to 1, with L = 0 representing black and L = 1 representing white. The weight determining module adjusts the weight α_LBP according to the brightness L as follows: when the brightness L of the central area is too high or too low, the value of the weight α_LBP is increased; when the brightness L of the central area is moderate, the value of the weight α_LBP is reduced, or even set to zero. The thresholds of the brightness L and the corresponding values of the weight α_LBP may be set according to the actual situation; for example, when the brightness L of the central region is 0.7 or more, or 0.3 or less, the value of the weight α_LBP is 0.22 or more; when the brightness L of the central region is greater than 0.3 and less than 0.7, the value of the weight α_LBP is less than 0.22. This embodiment provides a correspondence between the brightness L and the weight α_LBP, as shown in Table 1; FIG. 2 is a schematic diagram of the relationship between the brightness L and the weight α_LBP corresponding to Table 1.
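A minimal sketch of the adaptive weighting follows, assuming (since the patent's brightness formula appears only as an image) that L is approximated by the mean of the R, G, B gray levels normalized by 255; the 0.3/0.7 thresholds and the 0.22 boundary follow the description, while `w_normal = 0.1` is an illustrative value below 0.22:

```python
import numpy as np

def brightness(region):
    """Approximate brightness L in [0, 1] of an RGB face centre region.

    Assumption: mean of all R, G, B gray levels divided by 255 (the patent's
    exact formula is shown only as an image).
    region: uint8 array of shape (h, w, 3)
    """
    return float(region.astype(np.float64).mean() / 255.0)

def lbp_weight(L, w_interference=0.22, w_normal=0.1):
    """Adaptive alpha_LBP: raised under over-/under-exposure, lowered otherwise."""
    if L >= 0.7 or L <= 0.3:      # too bright or too dark: trust LBP more
        return w_interference
    return w_normal               # moderate lighting: rely mainly on FaceNet

dark = np.full((8, 8, 3), 20, dtype=np.uint8)   # a dark centre region (L ≈ 0.08)
a_lbp = lbp_weight(brightness(dark))
a_fn  = 1.0 - a_lbp                              # the two weights sum to 1
print(a_lbp, a_fn)                               # → 0.22 0.78
```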
The face recognition device provided by this embodiment was applied in a face recognition system. Experimental results show that when the face of the person to be recognized is irradiated by strong light, the recognition rate of the device is improved by about 5.6% compared with a face recognition device using the MTCNN+FaceNet algorithm; when the person to be recognized is in a dark environment, the recognition rate of the device is improved by about 4.2% compared with a face recognition device using the MTCNN+FaceNet algorithm. These results show that the device markedly improves face recognition in illumination interference environments.
The face recognition device for the adaptive illumination interference environment provided by the invention uses the LBP algorithm and the FaceNet algorithm to build the face data vector sets and perform face recognition, and adaptively adjusts the weights of the two algorithms during recognition. The device thus adapts to ambient illumination, improving the illumination robustness of the neural network and raising the face recognition rate in illumination interference environments.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above examples illustrate only a few embodiments of the invention; they are described in detail but are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the invention, all of which fall within the scope of the invention. Accordingly, the scope of protection of the invention is to be determined by the appended claims.

Claims (6)

1. A face recognition algorithm for an adaptive illumination interference environment, characterized by comprising the following steps:
step one: establishing a face data vector set
Inputting all training portrait photos in a portrait photo training set into an MTCNN, and detecting the face position and the face key point of each input training portrait photo by using a face feature classifier to obtain corresponding training face detection frame coordinates;
intercepting the corresponding training portrait photos according to the training face detection frame coordinates to obtain corresponding training face pictures;
inputting all the training face pictures into a FaceNet model and an LBP model respectively to obtain a corresponding face data vector set LIB_fn and a face data vector set LIB_lbp;
Step two: face recognition
Inputting a to-be-identified portrait photo into the MTCNN, and detecting a face position and a face key point of the input to-be-identified portrait photo by using a face feature classifier to obtain a to-be-identified face detection frame coordinate and a key point coordinate, wherein the key point coordinate comprises a left eye coordinate, a right eye coordinate, a left mouth corner coordinate and a right mouth corner coordinate;
intercepting the face photo to be recognized according to the face detection frame coordinates to be recognized to obtain a face picture to be recognized;
inputting the face picture to be recognized into a FaceNet model and an LBP model respectively to obtain a corresponding vector v_fn and a corresponding vector v_lbp;
respectively calculating the Euclidean distance dis_FN between the vector v_fn and each vector in the face data vector set LIB_fn, and the Euclidean distance dis_LBP between the vector v_lbp and each vector in the face data vector set LIB_lbp; then, for each training portrait photo, carrying out weighted summation calculation on the corresponding Euclidean distance dis_FN and Euclidean distance dis_LBP, the calculation formula being as follows:

dis = α_FN × dis_FN + α_LBP × dis_LBP

wherein α_FN is the weight of the Euclidean distance dis_FN, α_LBP is the weight of the Euclidean distance dis_LBP, 0 ≦ α_FN ≦ 1, 0 ≦ α_LBP ≦ 1, and α_FN + α_LBP = 1;
the values of the weight α_FN and the weight α_LBP are determined by: intercepting a central area of the face picture to be recognized according to the key point coordinates, and calculating the brightness of the central area, the calculation formula being as follows:

[brightness formula for L — rendered only as an image in the original; L is computed from the R, G and B gray levels of each pixel and lies in the range 0–1]
wherein R, G and B are the gray levels of the red, green and blue components of each pixel; L ranges from 0 to 1, with L = 0 representing black and L = 1 representing white; when the brightness L of the central region is 0.7 or more, or 0.3 or less, the value of the weight α_LBP is 0.22 or more; when the brightness L of the central region is greater than 0.3 and less than 0.7, the value of the weight α_LBP is less than 0.22;
and taking the training portrait photo corresponding to the minimum dis value as a face recognition result of the portrait photo to be recognized.
2. The face recognition algorithm for the adaptive illumination interference environment according to claim 1, wherein the vector v_fn and the vector v_lbp are 128-dimensional, the vectors in the face data vector set LIB_fn and the face data vector set LIB_lbp are 128-dimensional, and the vectors in the face data vector set LIB_lbp are reduced to 128 dimensions by an LDA algorithm.
3. The face recognition algorithm for the adaptive illumination interference environment according to claim 1, wherein the MTCNN is replaced with OpenCV.
4. A face recognition device for an adaptive illumination interference environment, characterized by comprising:
the training face detection module, used for inputting all training portrait photos in a portrait photo training set into the MTCNN, wherein the MTCNN detects the face position and face key points of each input training portrait photo by using a face feature classifier to obtain corresponding training face detection frame coordinates;
the training face picture intercepting module, used for intercepting the corresponding training portrait photos according to the training face detection frame coordinates to obtain corresponding training face pictures;
the data set building module, used for inputting all the training face pictures into a FaceNet model and an LBP model respectively to obtain a corresponding face data vector set LIB_fn and a face data vector set LIB_lbp;
the to-be-identified face detection module, used for inputting a portrait photo to be identified into the MTCNN, wherein the MTCNN detects the face position and face key points of the input portrait photo to be identified by using a face feature classifier to obtain to-be-identified face detection frame coordinates and key point coordinates, the key point coordinates comprising a left eye coordinate, a right eye coordinate, a left mouth corner coordinate and a right mouth corner coordinate;
the face picture intercepting module is used for intercepting the face picture to be recognized according to the face detection frame coordinates to be recognized to obtain a face picture to be recognized;
the vector module, used for inputting the face picture to be recognized into a FaceNet model and an LBP model respectively to obtain a corresponding vector v_fn and a corresponding vector v_lbp;
the calculation module, used for respectively calculating the Euclidean distance dis_FN between the vector v_fn and each vector in the face data vector set LIB_fn, and the Euclidean distance dis_LBP between the vector v_lbp and each vector in the face data vector set LIB_lbp, and then, for each training portrait photo, carrying out weighted summation calculation on the corresponding Euclidean distance dis_FN and Euclidean distance dis_LBP, the calculation formula being as follows:

dis = α_FN × dis_FN + α_LBP × dis_LBP

wherein α_FN is the weight of the Euclidean distance dis_FN, α_LBP is the weight of the Euclidean distance dis_LBP, 0 ≦ α_FN ≦ 1, 0 ≦ α_LBP ≦ 1, and α_FN + α_LBP = 1;
the weight determining module, used for intercepting a central area of the face picture to be recognized according to the key point coordinates and calculating the brightness of the central area, the calculation formula being as follows:
[brightness formula for L — rendered only as an image in the original; L is computed from the R, G and B gray levels of each pixel and lies in the range 0–1]
wherein R, G and B are the gray levels of the red, green and blue components of each pixel; L ranges from 0 to 1, with L = 0 representing black and L = 1 representing white; when the brightness L of the central region is 0.7 or more, or 0.3 or less, the value of the weight α_LBP is 0.22 or more; when the brightness L of the central region is greater than 0.3 and less than 0.7, the value of the weight α_LBP is less than 0.22; and
the recognition result module, used for taking the training portrait photo corresponding to the minimum dis value as the face recognition result of the portrait photo to be recognized.
5. The face recognition device for the adaptive illumination interference environment according to claim 4, wherein the vector v_fn and the vector v_lbp are 128-dimensional, the vectors in the face data vector set LIB_fn and the face data vector set LIB_lbp are 128-dimensional, and the vectors in the face data vector set LIB_lbp are reduced to 128 dimensions by an LDA algorithm.
6. The face recognition device for the adaptive illumination interference environment according to claim 4, wherein the MTCNN is replaced with OpenCV.
CN202110233058.0A 2021-03-03 2021-03-03 Face recognition algorithm and face recognition device for self-adaptive illumination interference environment Active CN112818938B (en)
