CN112199986A - Face image recognition method based on local binary pattern multi-distance learning - Google Patents

Face image recognition method based on local binary pattern multi-distance learning

Info

Publication number
CN112199986A
CN112199986A
Authority
CN
China
Prior art keywords
distance
image
binary pattern
follows
vectors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010843287.XA
Other languages
Chinese (zh)
Inventor
廖开阳
秦源苑
章明珠
曹从军
郑元林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN202010843287.XA priority Critical patent/CN112199986A/en
Publication of CN112199986A publication Critical patent/CN112199986A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147Distances to closest patterns, e.g. nearest neighbour classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467Encoded features or binary features, e.g. local binary patterns [LBP]

Abstract

The invention discloses a face image recognition method based on local binary pattern multi-distance learning, implemented according to the following steps: 1. preprocess the original image; 2. extract important local features from the image with a local binary pattern and convert them into a global description with a histogram method; 3. build a T-data training set from the correlations between the new representations of all the images; 4. feed the T-data training set obtained in step 3 into a back propagation neural network, set the network parameters, and start training; 5. apply steps 1 to 3 to each test image, feed the test data into the back propagation neural network trained in step 4 for recognition, and obtain the output. The invention integrates the advantages of five existing distance formulas, further improves precision and efficiency, and realizes accurate face recognition retrieval over large-scale data.

Description

Face image recognition method based on local binary pattern multi-distance learning
Technical Field
The invention belongs to the technical field of face image recognition, and relates to a face image recognition method based on local binary pattern multi-distance learning.
Background
In modern society, with the maturation of management systems, the development of computers, and people's expectation of intelligent, automated offices, traditional attendance methods can no longer meet people's needs, and new attendance systems based on biometric identification are receiving more and more attention and use. Such systems play an irreplaceable role in everyday settings such as students checking in for class and employees checking in for work.
Traditional attendance methods, such as manual form records, photoelectric or bar-code card readers, RFID card terminals, and mobile-phone app location check-in, suffer from problems such as attendance fraud and illegitimate card use; they are error-prone, consume large amounts of manpower, material, and financial resources, and cannot guarantee highly accurate results. To ensure accurate and convenient attendance, more and more researchers have turned to characteristics of the person themselves, which are unique, distinctive, and universal, such as the iris, fingerprint, voice, and face. Iris recognition is unique and stable, but in practical application it can identify only one person at a time, is inefficient, and requires the face to be close to the instrument, which easily causes discomfort. Voice recognition is very convenient for people who are hard to identify by other traits, but it requires low background noise and likewise identifies only one person at a time; once the background is too loud, or the subject is hoarse from illness or cannot speak at all, the method produces large errors, and a person's voice also changes with age (for example through puberty), so the technique has not been widely applied in practice.
Although the accuracy of face recognition is currently slightly lower than that of iris and fingerprint recognition, its contactless and non-invasive character makes it very easy for people to accept, and it has been called the friendliest biometric technology (Chellappa R, Wilson C L, Sirohey S. Human and machine recognition of faces: a survey [J]. Proceedings of the IEEE, 1995, 83(5): 705-741). A camera captures the person's facial features and transmits the data to a computer, which resizes the image, extracts and reduces features with preset programs and algorithms, compares the result against a prepared database, and finally returns a recognition result. This detection process mirrors human reasoning, and through researchers' efforts in recent years both the time required for face recognition and the final accuracy have greatly improved; in practice, however, changes in expression, accessories, and pose still reduce recognition efficiency and accuracy.
Samarth Bharadwaj et al. proposed the histogram of oriented optical flow (HOOF) as a motion descriptor and, building on LBP, LBP-TOP as a texture descriptor; the two descriptors are extracted, fused, and fed into a classifier to obtain a more objective result (Li X, Chen J, Zhao G, et al.). For the overfitting problem of the softmax loss function, Yousef Atoum et al. proposed a new loss function to supervise the CNN instead of softmax, and were the first to take the face depth map as a discriminative feature between live and spoofed faces (Atoum Y, Liu Y, Jourabloo A, Liu X: Face anti-spoofing using patch and depth-based CNNs. In: IJCB, IEEE (2017)).
At present, neural networks and big data have become the core strength of the face recognition field, and the technology is developing along three main lines: first, deeper and larger networks; second, growth in the amount of recognizable data; and third, more robust algorithms. Even so, the accuracy and efficiency of current face recognition technology remain limited.
Disclosure of Invention
The invention aims to provide a face image identification method based on local binary pattern multi-distance learning, which solves the problems of low accuracy and efficiency in the prior art.
The invention adopts the technical scheme that a face image recognition method based on local binary pattern multi-distance learning is implemented according to the following steps:
step 1, preprocessing an original image;
step 2, extracting important local features in the image by using a local binary pattern, and converting the important local features into global description by combining a histogram method;
step 3, acquiring a T-data training set collected based on the correlation among the new representations of all the images;
step 4, putting the T-data training set obtained in the step 3 into a back propagation neural network, setting back propagation neural network parameters, and then starting training;
and 5, applying the steps 1 to 3 to each test image, feeding the test image data into the back propagation neural network trained in the step 4 for recognition, and obtaining output.
The invention is also characterized in that:
the step 1 specifically comprises resizing and cropping the image to eliminate the face background effect, performing gray processing on the image and using histogram equalization to establish a robust face recognition system, thereby reducing noise and illumination.
The step 2 is implemented according to the following steps:
step 2.1, dividing the preprocessed image into 5×5 cells;
step 2.2, thresholding the 3×3 neighborhood of each pixel against its central value and treating the result as a binary number, thereby applying the LBP method to the image pixels;
step 2.3, concatenating the cell descriptions from step 2.1 with a histogram method to obtain a new representation of each image, i.e. the LBPH representation.
The mathematical formula of the LBP operator in step 2.2 is:

LBP(x) = \sum_{i=0}^{7} s(G(x_i) - G(x)) \cdot 2^i    (1)

where x denotes the central element with pixel value G(x), G(x_i) is the pixel value of the i-th element in the neighborhood, and s(x) is a sign function defined as:

s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}    (2)

The LBPH in step 2.3 is formulated as:

H(k) = \sum_{i,j} I\{LBP_{P,R}(i,j) = k\}    (3)

where P is the number of sample points, R is the radius, LBP is defined in equation (1), and i, j are the corresponding element coordinates.
Step 3 is specifically implemented according to the following steps:
3.1, calculating the distance between each image and other images by using five distance methods based on the LBPH representation of each image obtained in the step 2;
step 3.2, combining the five distances using the square root of the sum of squares, as follows:

DIS = \sqrt{\sum_i (\alpha_i \cdot DIS_i)^2}    (4)

where DIS_i is one of the distance methods and \alpha is an intensity factor, specifically assigned as: Mahalanobis and Manhattan both 0.3, Canberra 0.2, and correlation and Euclidean both 0.1;
and step 3.3, after the five distance methods and the RSS method are applied, obtaining the training data set, namely the T-data training set.
The five distance methods are respectively as follows:
the Mahalanobis distance method uses the covariance matrix between two vectors and is formulated as:

D(a, b) = \sqrt{(a - b)^T S^{-1} (a - b)}    (5)

where a, b are the two corresponding vectors and S^{-1} is the inverse of the covariance matrix;
the correlation distance is sensitive to the linear relationship between two vectors and equals zero when they are perfectly correlated; it is formulated as:

D(a, b) = 1 - \frac{Cov(a, b)}{\sigma_a \sigma_b}    (6)

where a and b are the two corresponding vectors, Cov is the covariance, and \sigma_a, \sigma_b are the standard deviations of a and b;
the Euclidean distance method is the basis of many similarity and dissimilarity measures; the Euclidean distance between corresponding elements of the two vectors is:

D(a, b) = \sqrt{\sum_i (a_i - b_i)^2}    (7)

where a and b are the two corresponding vectors;
the Canberra distance method is a numerical measure of the distance between two points in a vector space, formulated as:

D(a, b) = \sum_i \frac{|a_i - b_i|}{|a_i| + |b_i|}    (8)

where a and b are the two corresponding vectors;
the Manhattan distance method is another way to measure the distance between two vectors, formulated as:

D(a, b) = \sum_i |a_i - b_i|    (9)

where a, b are the two corresponding vectors.
Step 4 is specifically implemented according to the following steps:
step 4.1, setting the number of layers and the number of neurons;
step 4.2, setting iteration times;
4.3, setting a threshold value;
4.4, setting an input matrix and the expected output of the previous step;
step 4.5, randomly initializing the weight and the deviation;
and 4.6, making a training plan.
The invention has the beneficial effects that:
(1) LBPH is an improved local binary pattern well suited to feature extraction because it describes the texture and structure of an image; the invention represents the face image with the LBPH method and thereby reduces the image size;
(2) KNN is a common method in computer vision; most implementations use the Euclidean distance, which yields lower accuracy than other methods, yet each distance method provides a different level of accuracy depending on the problem domain, so the M-KNN of the invention considers five distance methods together to improve the accuracy of face recognition;
(3) the improved BPNN computes gradient descent more efficiently, classifies better, and is easy to implement, so BPNN classification can be widely used to train the neural network.
Drawings
FIG. 1 is a flow chart of a face image recognition method based on local binary pattern multi-distance learning according to the present invention;
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention discloses a face image recognition method based on local binary pattern multi-distance learning, which is implemented according to the following steps as shown in figure 1:
step 1, preprocessing an original image;
the method specifically comprises the steps of carrying out size adjustment and cutting on an image to eliminate a human face background effect, carrying out gray level processing on the image and establishing a robust human face recognition system by using histogram equalization so as to reduce noise and illumination;
step 2, extracting important local features from the image with a local binary pattern, specifically a uniform-pattern (u2) LBP descriptor with 8 sample points, and converting them into a global description with a histogram method, implemented as follows:
step 2.1, dividing the preprocessed image into 5×5 cells; after trying different grid sizes, the image is divided into 25 small cells because the 5×5 grid offers better performance in reasonable time: a smaller grid (e.g. 4×4) yields fewer features (4 × 4 × 59 = 944) than the 5×5 grid (5 × 5 × 59 = 1475), which reduces accuracy, while a larger grid yields more features and might relieve underfitting when training the neural network with LBPH descriptors in the M-KNN and BPNN enhanced face recognition, but it increases computation time for only a slight gain in accuracy;
step 2.2, thresholding the 3×3 neighborhood of each pixel against its central value and treating the result as a binary number, thereby applying the LBP method to the image pixels;
the mathematical formula of the LBP operator is:

LBP(x) = \sum_{i=0}^{7} s(G(x_i) - G(x)) \cdot 2^i    (1)

where x denotes the central element with pixel value G(x), G(x_i) is the pixel value of the i-th element in the neighborhood, and s(x) is a sign function defined as:

s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}    (2)
step 2.3, concatenating the cell descriptions from step 2.1 with a histogram method to obtain a new representation of each image, namely the LBPH representation;
the invention uses an improved LBPH operator called the uniform pattern. The uniformity measure of an LBP code is the number of bitwise transitions from 1 to 0 or vice versa; a pattern is called uniform if this measure is at most 2. For example, the patterns 11111111 (0 transitions), 01111100 (2 transitions), and 11000111 (2 transitions) are uniform, while 10001000 (3 transitions) and 11010011 (4 transitions) are not. For dimensionality reduction, the histogram of image features is shrunk from 256 bins to 59 bins while retaining the information about local patterns: each uniform pattern gets its own bin and all non-uniform patterns share a single bin, there being 58 uniform patterns among 8-bit binary numbers. Using these 58 bins plus 1 bin for all non-uniform patterns, a global description of the face image is obtained by concatenating the histograms of all local regions, and the total LBPH value can be expressed as a histogram formulated as:

H(k) = \sum_{i,j} I\{LBP_{P,R}(i,j) = k\}    (3)

where P is the number of sample points, R is the radius, LBP is defined in equation (1), and i, j are the corresponding element coordinates;
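The LBP coding and uniform-pattern LBPH histogram of step 2 can be sketched as below. This is an illustrative implementation under stated assumptions: the clockwise ordering of the 8 neighbour bits, the circular transition count used to classify uniform patterns, and the handling of border pixels (interior pixels only) are choices not fixed by the patent text.

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP codes for interior pixels (equations (1)-(2))."""
    c = img[1:-1, 1:-1].astype(np.int32)
    # Neighbours clockwise from top-left; bit order is an assumption.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx].astype(np.int32)
        code |= ((n - c) >= 0).astype(np.int32) << bit  # s(G(x_i) - G(x))
    return code

def uniform_table():
    """Map each 8-bit code to one of 59 bins (58 uniform + 1 shared)."""
    table, nxt = np.full(256, 58, np.int32), 0
    for v in range(256):
        bits = [(v >> i) & 1 for i in range(8)]
        trans = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
        if trans <= 2:          # uniform pattern gets its own bin
            table[v] = nxt
            nxt += 1
    return table

def lbph(img, grid=5):
    """Concatenate 59-bin histograms over a grid x grid division (eq. (3))."""
    codes = uniform_table()[lbp_codes(img)]
    h, w = codes.shape
    feats = []
    for gy in range(grid):
        for gx in range(grid):
            cell = codes[gy * h // grid:(gy + 1) * h // grid,
                         gx * w // grid:(gx + 1) * w // grid]
            feats.append(np.bincount(cell.ravel(), minlength=59))
    return np.concatenate(feats)  # 25 * 59 = 1475 features for grid=5
```

With the default `grid=5` on a preprocessed face image, this produces the 1475-dimensional feature vector the description mentions.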
and 3, acquiring a T-data training set collected based on the correlation among the new representations of all the images, and specifically implementing the following steps:
3.1, calculating the distance between each image and other images by using five distance methods based on the LBPH representation of each image obtained in the step 2;
the five distance methods are respectively as follows:
the mahalanobis distance method utilizes a covariance matrix between two vectors, which is expressed by the formula:
Figure BDA0002642200540000091
where a, b are the corresponding two vectors, S-1Is the inverse of the covariance matrix;
correlation distance classification, the measure of correlation is equal to zero and is sensitive to a linear relationship between two vectors, formulated as follows:
Figure BDA0002642200540000092
wherein a and b are two corresponding vectors, Cov is covariance, and σ a and σ b are standard deviations of a and b;
the euclidean distance method is the basis for many similar and dissimilar methods. The following formula is used to calculate the euclidean distance between corresponding elements of the two vector spaces:
Figure BDA0002642200540000093
wherein a and b are two corresponding vectors;
the Kanbera distance method is a numerical measure of the distance between two points in vector space, and is formulated as follows:
Figure BDA0002642200540000094
wherein a and b are two corresponding vectors;
the manhattan distance method is another method for measuring the distance between two vectors, and is formulated as follows:
Figure BDA0002642200540000101
wherein a and b are two corresponding vectors;
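The five distance methods (equations (5)-(9)) can be sketched in numpy as below. The function names and the zero-denominator guard in the Canberra distance are assumptions for illustration; the Mahalanobis variant takes the inverse covariance matrix `S_inv` as an argument, matching the S^{-1} in equation (5).

```python
import numpy as np

def mahalanobis(a, b, S_inv):
    # Equation (5): sqrt((a - b)^T S^-1 (a - b)).
    d = a - b
    return float(np.sqrt(d @ S_inv @ d))

def correlation(a, b):
    # Equation (6): 1 - Cov(a, b) / (sigma_a * sigma_b).
    return float(1.0 - np.cov(a, b)[0, 1]
                 / (np.std(a, ddof=1) * np.std(b, ddof=1)))

def euclidean(a, b):
    # Equation (7).
    return float(np.sqrt(np.sum((a - b) ** 2)))

def canberra(a, b):
    # Equation (8); zero denominators are guarded (an assumed convention).
    denom = np.abs(a) + np.abs(b)
    return float(np.sum(np.abs(a - b) / np.where(denom == 0, 1, denom)))

def manhattan(a, b):
    # Equation (9).
    return float(np.sum(np.abs(a - b)))
```

For LBPH vectors, `S_inv` would be estimated once from the training set's covariance.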
step 3.2, combining the five distances using the square root of the sum of squares; each distance algorithm outperforms the others along different dimensions, so an intensity factor \alpha is added to improve the accuracy of the final scheme, as follows:

DIS = \sqrt{\sum_i (\alpha_i \cdot DIS_i)^2}    (4)

where DIS_i is one of the distance methods and \alpha is an intensity factor;
experiments show that the Mahalanobis and Manhattan distances are more advantageous than the other distance methods, so the intensity factors are specifically assigned as: Mahalanobis and Manhattan both 0.3, Canberra 0.2, and correlation and Euclidean both 0.1;
step 3.3, after the five distance methods and the RSS method are applied, obtaining the training data set, namely the T-data training set;
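The RSS combination of equation (4) with the stated intensity factors can be sketched as follows. Placing each factor inside the square, one weight per distance method, is an interpretation of the formula; the dictionary-based interface is an assumption for illustration.

```python
import math

# Intensity factors from the patent: Mahalanobis/Manhattan 0.3,
# Canberra 0.2, correlation/Euclidean 0.1.
ALPHA = {"mahalanobis": 0.3, "manhattan": 0.3, "canberra": 0.2,
         "correlation": 0.1, "euclidean": 0.1}

def rss_distance(dists):
    """Equation (4): square root of the sum of squared weighted distances.

    `dists` maps each distance name to its value for one image pair.
    """
    return math.sqrt(sum((ALPHA[k] * v) ** 2 for k, v in dists.items()))
```

Computing `rss_distance` for every training pair yields the scalars collected into the T-data training set.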
step 4, putting the T-data training set obtained in the step 3 into a back propagation neural network, setting back propagation neural network parameters, and then starting training, wherein the training is specifically implemented according to the following steps:
step 4.1, setting the number of layers and the number of neurons, where the BPNN architecture comprises an input layer, two fully connected hidden layers, and a softmax classification layer;
step 4.2, setting iteration times;
4.3, setting a threshold value;
4.4, setting an input matrix and the expected output of the previous step;
step 4.5, randomly initializing the weight and the deviation;
step 4.6, making a training plan.
The present invention may be described using the modified Back Propagation Neural Network (BPNN) algorithm as follows:
1) Randomly initialize the weights w_{ij} and the thresholds \theta_j;
2) after feeding the training input set I_p and the expected output set O_p into the neural network, compute the output of every layer as:

y_{jp} = f\left(\sum_{i=1}^{N_{l-1}} w_{ij} y_{ip} - \theta_j\right)    (10)

where w_{ij} is a weight, \theta_j is a threshold, y_{ip} is the signal from source i of the previous layer (the training data for the first layer), y_{jp} is the output of neuron j summed over the signal sources 1 to N_{l-1}, and the function f, called the activation function or transfer function, is defined in equation (15);
3) In the output layer, compute the error term as:

\delta_{jp} = f'(net_{jp}) (d_{jp} - y_{jp})    (11)

where f' is the derivative of the function f, defined in equation (16), d_{jp} is the expected output, and L denotes the number of network layers;
in the i-th hidden layer (i = L-1, L-2, ...), the error term takes the form:

\delta_{jp}^{(i)} = f'(net_{jp}) \sum_{k} \delta_{kp}^{(i+1)} w_{jk}    (12)

where f' is the derivative of the function f, defined in equation (16), and w_{jk} is a weight;
4) The weight and threshold changes between input and output are formulated as:

\Delta w_{ij} = \eta \, \delta_{jp} \, y_{ip}    (13)

\Delta \theta_j = -\eta \, \delta_{jp}    (14)

where w_{ij} denotes a weight, \theta_j denotes a threshold, and \eta is the learning rate;
5) If the mean square error is greater than the threshold, return to step 2); otherwise, stop and output the weights. Many neuron activation functions are used in neural networks; the proposed system uses the sigmoid function, whose formula and derivative are:

f(x) = \frac{1}{1 + e^{-x}}    (15)

f'(x) = f(x)(1 - f(x))    (16);
Because it is difficult to reach the least mean square error with sigmoid activation functions in a large-scale neural network system, the invention uses a mixed-neuron modified back propagation algorithm, expressed as:

f(x) = \lambda \cdot s(x) + (1 - \lambda) \cdot h(x)    (17)

where \lambda \ne 0 is a manually set parameter called a hyper-parameter, s(x) is the sigmoid function defined in equation (15), and h(x) is a hard-limit function defined in equation (18); the derivative is given by equation (19):

h(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}    (18)

f'(x) = \lambda \, s(x)(1 - s(x)), \quad \lambda \ne 0    (19)
the neural network often falls into a local minimum and the learning speed is updated according to equation (20), in order to make the neural network faster and reach zero error, the coefficient α is added to the steepness of the sigmoid function defined in the equation, which is expressed in the form:
λ(n)=e-1/SSE (20);
Figure BDA0002642200540000131
wherein SSE is the sum of squares error and α is a coefficient;
the derivative is formulated as follows: :
f′(x)=α·f(x)(1-f(x)) (22);
wherein α is a coefficient, f (x) is defined in formula (21);
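The activation functions of equations (15)-(22) can be sketched as below. The function names are assumptions for illustration; the hybrid activation follows equation (17) with the sigmoid as s(x), and the steepened sigmoid follows equations (21)-(22).

```python
import math

def sigmoid(x, alpha=1.0):
    # Equations (15)/(21): sigmoid with steepness coefficient alpha.
    return 1.0 / (1.0 + math.exp(-alpha * x))

def sigmoid_deriv(x, alpha=1.0):
    # Equations (16)/(22): f'(x) = alpha * f(x) * (1 - f(x)).
    f = sigmoid(x, alpha)
    return alpha * f * (1.0 - f)

def hard_limit(x):
    # Equation (18): step function.
    return 1.0 if x >= 0 else 0.0

def hybrid(x, lam):
    # Equation (17): mixed-neuron activation, lam != 0.
    return lam * sigmoid(x) + (1.0 - lam) * hard_limit(x)

def lam_update(sse):
    # Equation (20): learning parameter from the sum of squared errors.
    return math.exp(-1.0 / sse)
```

During training, `lam_update` would be called once per epoch from the current SSE before the next forward pass.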
and 5, applying the steps 1 to 3 to each test image, feeding the test image data into the back propagation neural network trained in the step 4 for recognition, and obtaining output.
The face recognition method based on LBPH, M-KNN, and BPNN makes full use of the local features of the picture and of the BPNN, and proposes combining five distance formulas by the root of the sum of squares (RSS), which integrates the advantages of the five existing distance formulas, further improves precision and efficiency, and realizes accurate face recognition retrieval over large-scale data.

Claims (7)

1. A face image recognition method based on local binary pattern multi-distance learning is characterized by comprising the following steps:
step 1, preprocessing an original image;
step 2, extracting important local features in the image by using a local binary pattern, and converting the important local features into global description by combining a histogram method;
step 3, acquiring a T-data training set collected based on the correlation among the new representations of all the images;
step 4, putting the T-data training set obtained in the step 3 into a back propagation neural network, setting back propagation neural network parameters, and then starting training;
and 5, applying the steps 1 to 3 to each test image, feeding the test image data into the back propagation neural network trained in the step 4 for recognition, and obtaining output.
2. The method as claimed in claim 1, wherein step 1 specifically includes resizing and cropping the image to eliminate the effect of the background behind the face, converting the image to grayscale, and applying histogram equalization to build a robust face recognition system, thereby reducing the effects of noise and illumination.
3. The method for recognizing the face image based on the local binary pattern multi-distance learning as claimed in claim 1, wherein the step 2 is implemented according to the following steps:
step 2.1, dividing the preprocessed image into 5×5 cells;
step 2.2, thresholding the 3×3 neighborhood of each pixel against its central value and treating the result as a binary number, thereby applying the LBP method to the image pixels;
step 2.3, concatenating the cell descriptions from step 2.1 with a histogram method to obtain a new representation of each image, i.e. the LBPH representation.
4. The method for recognizing the facial image based on the local binary pattern multi-distance learning as claimed in claim 1, wherein the mathematical formula of the LBP operator in step 2.2 is:

LBP(x) = \sum_{i=0}^{7} s(G(x_i) - G(x)) \cdot 2^i    (1)

where x denotes the central element with pixel value G(x), G(x_i) is the pixel value of the i-th element in the neighborhood, and s(x) is a sign function defined as:

s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}    (2)

The LBPH in step 2.3 is formulated as:

H(k) = \sum_{i,j} I\{LBP_{P,R}(i,j) = k\}    (3)

where P is the number of sample points, R is the radius, LBP is defined in equation (1), and i, j are the corresponding element coordinates.
5. The method for recognizing the face image based on the local binary pattern multi-distance learning as claimed in claim 1, wherein the step 3 is implemented according to the following steps:
3.1, calculating the distance between each image and other images by using five distance methods based on the LBPH representation of each image obtained in the step 2;
step 3.2, combining the five distances using the square root of the sum of squares, as follows:

DIS = \sqrt{\sum_i (\alpha_i \cdot DIS_i)^2}    (4)

where DIS_i is one of the distance methods and \alpha is an intensity factor, specifically assigned as: Mahalanobis and Manhattan both 0.3, Canberra 0.2, and correlation and Euclidean both 0.1;
and step 3.3, after the five distance methods and the RSS method are applied, obtaining the training data set, namely the T-data training set.
6. The method for recognizing the face image based on the local binary pattern multi-distance learning according to claim 5, wherein the five distance methods are respectively as follows:
the Mahalanobis distance method uses the covariance matrix between two vectors and is formulated as:

D(a, b) = \sqrt{(a - b)^T S^{-1} (a - b)}    (5)

where a, b are the two corresponding vectors and S^{-1} is the inverse of the covariance matrix;
the correlation distance is sensitive to the linear relationship between two vectors and equals zero when they are perfectly correlated; it is formulated as:

D(a, b) = 1 - \frac{Cov(a, b)}{\sigma_a \sigma_b}    (6)

where a and b are the two corresponding vectors, Cov is the covariance, and \sigma_a, \sigma_b are the standard deviations of a and b;
the Euclidean distance method is the basis of many similarity and dissimilarity measures; the Euclidean distance between corresponding elements of the two vectors is:

D(a, b) = \sqrt{\sum_i (a_i - b_i)^2}    (7)

where a and b are the two corresponding vectors;
the Canberra distance method is a numerical measure of the distance between two points in a vector space, formulated as:

D(a, b) = \sum_i \frac{|a_i - b_i|}{|a_i| + |b_i|}    (8)

where a and b are the two corresponding vectors;
the Manhattan distance method is another way to measure the distance between two vectors, formulated as:

D(a, b) = \sum_i |a_i - b_i|    (9)

where a, b are the two corresponding vectors.
7. The method for recognizing the face image based on the local binary pattern multi-distance learning as claimed in claim 1, wherein the step 4 is implemented according to the following steps:
step 4.1, setting the number of layers and the number of neurons;
step 4.2, setting iteration times;
step 4.3, setting a threshold value;
step 4.4, setting the input matrix and the expected output from the previous step;
step 4.5, randomly initializing the weight and the deviation;
and step 4.6, making a training plan.
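Steps 4.1 through 4.6 set up a back-propagation neural network before training. The sketch below illustrates that setup only; the layer sizes, iteration count, threshold, and random data are all hypothetical placeholders, since the claim does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 8, 16, 4   # step 4.1: layers and neuron counts
n_iter = 100                       # step 4.2: iteration count
threshold = 1e-3                   # step 4.3: error threshold

X = rng.random((32, n_in))         # step 4.4: input matrix (placeholder data)
Y = rng.random((32, n_out))        # step 4.4: expected output (placeholder)

W1 = rng.normal(0, 0.1, (n_in, n_hidden))    # step 4.5: random weights
b1 = np.zeros(n_hidden)                      # step 4.5: biases
W2 = rng.normal(0, 0.1, (n_hidden, n_out))
b2 = np.zeros(n_out)

# step 4.6: the training plan then runs forward/backward passes
# until n_iter is reached or the loss drops below threshold.
```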
CN202010843287.XA 2020-08-20 2020-08-20 Face image recognition method based on local binary pattern multi-distance learning Pending CN112199986A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010843287.XA CN112199986A (en) 2020-08-20 2020-08-20 Face image recognition method based on local binary pattern multi-distance learning


Publications (1)

Publication Number Publication Date
CN112199986A true CN112199986A (en) 2021-01-08

Family

ID=74006462


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113837161A (en) * 2021-11-29 2021-12-24 广东东软学院 Identity recognition method, device and equipment based on image recognition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156887A (en) * 2011-03-28 2011-08-17 湖南创合制造有限公司 Human face recognition method based on local feature learning
US20170213074A1 (en) * 2016-01-27 2017-07-27 Intel Corporation Decoy-based matching system for facial recognition
CN108388862A (en) * 2018-02-08 2018-08-10 西北农林科技大学 Face identification method based on LBP features and nearest neighbor classifier


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MOHANNAD A. ABUZNEID ET AL.: "Enhanced Human Face Recognition Using LBPH Descriptor, Multi-KNN, and Back-Propagation Neural Network", 《IEEE ACCESS》 *
Yang Yuwang et al.: "Application of multi-distance classifier combination in face recognition", Computer Engineering *
Wang Binbin et al.: "Face recognition based on fusion of global and local features", Modern Computer *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination