CN109740578A - A face recognition method robust to illumination, pose, and expression variation - Google Patents

A face recognition method robust to illumination, pose, and expression variation

Info

Publication number
CN109740578A
CN109740578A
Authority
CN
China
Prior art keywords
formula
layer
facial image
hidden
local binary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910153743.5A
Other languages
Chinese (zh)
Inventor
孙崐
李晓彤
殷欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology
Priority to CN201910153743.5A priority Critical patent/CN109740578A/en
Publication of CN109740578A publication Critical patent/CN109740578A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a face recognition method robust to illumination, pose, and expression variation. The method first obtains face images from the ORL, Extended Yale B, and CMU-PIE face databases and partitions each image into sub-blocks. Second, the texture features of each sub-block are extracted with the center-symmetric local binary pattern (CS-LBP). Third, the texture features are assembled into a statistical histogram, which is fed to the visible layer of a deep belief network (DBN). Finally, classification and recognition of the face images are completed by deep learning. On this basis, face recognition experiments on the databases determine the optimal partitioning scheme for each database and the optimal number of DBN hidden units, and comparative experiments against several face recognition methods are carried out. Because the invention uses CS-LBP for feature extraction, it reduces the computational complexity of feature extraction, achieves a high recognition rate, and suppresses the influence of small variations in illumination, pose, and expression to a certain extent.

Description

A face recognition method robust to illumination, pose, and expression variation
Technical field
The present invention relates to the technical field of face recognition, and in particular to a face recognition method robust to illumination, pose, and expression variation.
Background art
In the current era of rapid informatization, identity authentication is widely used in every aspect of life, for example in real-name systems at airports and train stations, public-security personal information collection systems, and dormitory smart door-lock systems. While pursuing convenience, however, people are paying ever more attention to information security. Among traditional biometric methods, fingerprint recognition is highly sensitive to the humidity and cleanliness of the finger: dirt, oil, and water all degrade recognition, so its recognition rate is low. Iris recognition is easy to spoof, and its identification reliability is poor. Gait recognition is hard to capture and easily lost. Compared with these three biometrics, face recognition offers accurate biometric features, high reliability, and easy capture, and has therefore become one of the most popular recognition methods.
With the steady development of computer vision and the growing demand for human-computer interaction, face recognition has gradually been extended to fields such as secure payment, phone unlocking, and smart door locks. Face images collected in these settings are generally captured under unconstrained conditions, where illumination changes, pose changes, and expression changes all lower the recognition rate; research on face recognition under unconstrained conditions therefore still faces many challenges. Liang Shufen proposed a face recognition method combining LBP with a deep belief network, in which the LBP feature vector of a face image is fed to the DBN as input so that the network can learn local facial texture features and thereby improve the recognition rate. Since LBP is invariant to illumination and rotation, that method also suppresses illumination and rotation effects to some extent. Further study shows, however, that the texture features extracted by LBP are sparse, high-dimensional, and sensitive to noise, which makes the deep network's learning computationally heavy and time-consuming and makes it hard for the network to reach a global optimum.
Summary of the invention
In view of the needs and shortcomings of current technology, the present invention proposes a face recognition method robust to illumination, pose, and expression variation with good recognition performance.
To solve the above problems, the invention adopts a concrete scheme whose steps are as follows.
S1. Obtain face images: download face images from a face image database.
S2. Divide all face images into a training set and a test set, and partition each image into $k$ sub-blocks.
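The partitioning in S2 can be sketched as follows; a minimal NumPy sketch, assuming equal-sized blocks with any leftover edge pixels cropped (the patent does not specify boundary handling), and `split_into_blocks` is a name chosen here:

```python
import numpy as np

def split_into_blocks(img, rows, cols):
    """Split a 2-D grayscale image into rows * cols equal sub-blocks.

    Pixels that do not divide evenly are cropped -- a simplification,
    since the patent does not state how block boundaries are handled.
    """
    h, w = img.shape
    bh, bw = h // rows, w // cols
    return [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(rows) for c in range(cols)]

# Example: an ORL-sized 112x92 image split into 4 x 2 = 8 sub-blocks
blocks = split_into_blocks(np.zeros((112, 92), dtype=np.uint8), 4, 2)
```

Each sub-block is then processed independently in steps S3 and S4.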
S3. Using the coding rule of the center-symmetric local binary pattern (CS-LBP), extract the texture feature value of each sub-block; the feature value is denoted $CS\text{-}LBP$.
S4. Build the CS-LBP texture-feature histogram; a statistical histogram represents the local texture feature of each sub-block. The histogram of the $i$-th sub-block is expressed as:

$$H_i(j) = \sum_{x,y} I\{\,CS\text{-}LBP(x,y) = j\,\}, \qquad j = 0, 1, \dots, J-1 \tag{1}$$

In formula (1), $I\{A\} = 1$ when $A$ is true and $0$ otherwise; $H_i(j)$ is the frequency with which the CS-LBP texture feature value in the sub-block equals $j$; $j \in [0, J)$, where $J$ is the number of distinct CS-LBP values, here 16.
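A minimal sketch of the histogram in formula (1), assuming the CS-LBP codes of a sub-block have already been computed; `cslbp_histogram` is a name chosen here:

```python
import numpy as np

def cslbp_histogram(codes, n_bins=16):
    """H_i(j): frequency of each CS-LBP code j within one sub-block.

    With 8 neighbours there are 4 centre-symmetric pairs, so codes lie
    in [0, 2**4) = [0, 16), giving a 16-bin histogram per sub-block.
    """
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist

codes = np.array([[0, 3], [3, 15]])   # toy 2x2 sub-block of CS-LBP codes
h = cslbp_histogram(codes)
```

Concatenating these per-block histograms, as in step S5, yields the feature vector fed to the network.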
S5. Concatenate the feature histograms of all sub-blocks in order to form the CS-LBP feature vector $T$ of the face image.
S6. The texture feature vector $T$ obtained in step S5 is input to the visible layer of a deep belief network. The joint distribution of the visible layer and the hidden layers is:

$$P(v, h^1, \dots, h^{\ell}) = P(v \mid h^1)\,P(h^1 \mid h^2)\cdots P(h^{\ell-2} \mid h^{\ell-1})\,P(h^{\ell-1}, h^{\ell}) \tag{2}$$

In formula (2), $v$ is the texture feature extracted by CS-LBP, and $h^1, \dots, h^{\ell}$ are the higher-level features of different levels that the deep belief network learns from the input feature $v$. In the present invention the number of hidden layers is set to 2, so from formula (2) the joint distribution of the visible layer and the two hidden layers is:

$$P(v, h^1, h^2) = P(v \mid h^1)\,P(h^1, h^2) \tag{3}$$

In formula (3), $v$ is the visible layer, $h^1$ is the first hidden layer, and $h^2$ is the second hidden layer. From the relationship between the visible units of the visible layer and the hidden units of the first hidden layer, the activation probability of a first-layer hidden unit is:

$$P(h^1_j = 1 \mid v) = \sigma\!\Big(b_j + \sum_{i=1}^{n} v_i\, w_{ij}\Big) \tag{4}$$

In formula (4), $v_i$ is a visible unit, $n$ is the number of visible units, $h^1_j$ is a hidden unit, $\sigma(\cdot)$ is the activation function (the sigmoid), and $w_{ij}$ is the weight connecting the $i$-th visible unit to the $j$-th hidden unit.
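The activation probability in formula (4) can be computed in one vectorized step; a minimal sketch, with `hidden_activation` a name chosen here and the layer sizes illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_activation(v, W, b):
    """Formula (4): P(h_j = 1 | v) = sigmoid(b_j + sum_i v_i * w_ij).

    v: visible vector of shape (n,); W: weight matrix of shape (n, m);
    b: hidden-layer biases of shape (m,). Returns the m activation
    probabilities of the first hidden layer.
    """
    return sigmoid(b + v @ W)

rng = np.random.default_rng(0)
v = rng.random(16)                      # e.g. one 16-bin CS-LBP histogram
W = rng.standard_normal((16, 8))
p = hidden_activation(v, W, np.zeros(8))
```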
S7. Optimize the weights $W$ with the deep belief network's iterative training algorithm to obtain the optimal trained network; the number of iterations is 30. The criterion for the optimal network is that the maximum generation probability (likelihood) of the training set is maximized:

$$W^{*} = \arg\max_{W} \prod_{t=1}^{m} P\big(v^{(t)}\big) \tag{5}$$

In formula (5), $W$ is the weight matrix and $v^{(t)}$ is the CS-LBP texture feature of the $t$-th of the $m$ training samples. The learning rate is set to 0.001.
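The patent specifies 30 training iterations and a learning rate of 0.001 but not the update rule itself; contrastive divergence (CD-1), the standard choice for DBN pre-training, is assumed in this sketch, and `cd1_update` is a name chosen here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, lr=0.001, rng=None):
    """One contrastive-divergence (CD-1) step for a single RBM.

    Approximates gradient ascent on log P(v) of formula (5).
    v0: batch of visible vectors (B, n); W: weights (n, m);
    a: visible biases (n,); b: hidden biases (m,).
    """
    rng = rng or np.random.default_rng(0)
    ph0 = sigmoid(b + v0 @ W)                    # P(h | v0), positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    v1 = sigmoid(a + h0 @ W.T)                   # reconstruction P(v | h0)
    ph1 = sigmoid(b + v1 @ W)                    # P(h | v1), negative phase
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch  # positive minus negative phase
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)
    return W, a, b

v0 = np.random.default_rng(1).random((8, 5))
W, a, b = cd1_update(v0, np.zeros((5, 3)), np.zeros(5), np.zeros(3))
```

Running this update for 30 iterations per layer corresponds to the training loop described in step S7.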
S8. The top layer of the optimal network from step S7 performs classification with a classifier, yielding the class label of each test sample.
Preferably, the block partitioning in step S2 seeks the optimal partitioning scheme: the recognition rate of several partitioning schemes is computed for each face database, and the scheme with the highest recognition rate is taken as the best partitioning scheme for that database.
Preferably, the center-symmetric local binary pattern in step S3 compares the gray values of the two pixels $g_i$ and $g_{i+N/2}$ that are symmetric about the central pixel. When the difference is greater than 0, the corresponding binary bit is 1, otherwise 0; the binary code is then converted to decimal to obtain the CS-LBP feature value, as shown in formulas (6) and (7):

$$CS\text{-}LBP_{R,N}(x_c, y_c) = \sum_{i=0}^{N/2-1} s\big(g_i - g_{i+N/2}\big)\, 2^{i} \tag{6}$$

$$s(x) = \begin{cases} 1, & x > 0 \\ 0, & \text{otherwise} \end{cases} \tag{7}$$

In formula (6), $N$ is the number of neighborhood pixels around the central pixel, i.e. $g_0, g_1, \dots, g_{N-1}$, sampled on a circle of radius $R$; $s(\cdot)$ is the sign function defined in formula (7); and $g_i$ is the gray value of a pixel in the neighborhood of the central pixel.
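The coding rule of formulas (6) and (7) can be sketched for a single 3x3 window; `cslbp_code` is a name chosen here, and the clockwise neighbour ordering is an assumption (any fixed ordering works, the patent does not specify one):

```python
import numpy as np

def cslbp_code(window):
    """CS-LBP code of the centre pixel of a 3x3 window, formulas (6)-(7).

    The 8 neighbours g_0..g_7 are read clockwise from the top-left.
    Each centre-symmetric pair (g_i, g_{i+4}) contributes bit i:
    1 if g_i - g_{i+4} > 0, else 0, weighted by 2**i.
    """
    g = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
         window[2, 2], window[2, 1], window[2, 0], window[1, 0]]
    return sum(int(g[i] - g[i + 4] > 0) << i for i in range(4))

w = np.array([[9, 1, 8],
              [2, 5, 7],
              [3, 4, 6]])
# pairs: (9,6)->1, (1,4)->0, (8,3)->1, (7,2)->1  =>  1 + 0 + 4 + 8 = 13
code = cslbp_code(w)
```

Only 4 comparisons per pixel are needed, versus 8 for classical LBP, which is the source of the complexity reduction claimed below.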
Preferably, for a deep belief network with $\ell$ hidden layers in step S6, the joint distribution of the visible units and the $\ell$ layers of hidden units is as shown in formula (8):

$$P(v, h^1, \dots, h^{\ell}) = \Bigg(\prod_{k=0}^{\ell-2} P\big(h^{k} \mid h^{k+1}\big)\Bigg) P\big(h^{\ell-1}, h^{\ell}\big), \qquad h^0 = v \tag{8}$$

In formula (8), each conditional $P(h^{k} \mid h^{k+1})$ is parameterized by the bias of layer $k$ and the weights between layer $k$ and layer $k+1$. Within the deep belief network, each pair of adjacent layers can be regarded as a restricted Boltzmann machine: when the input is $v$, the hidden layer $h^1$ is obtained through $P(h^1 \mid v)$; when the input is $h^1$, the visible layer is reconstructed through $P(v \mid h^1)$. This process is the deep learning process. For each hidden layer of the deep belief network there exists an optimal number of hidden units that maximizes the recognition rate of the invention; through experiments on the face databases, the invention determines the best number of hidden units of the deep belief network for each database.
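The stacked structure described above can be sketched as a forward pass through the trained layers; a minimal sketch with `dbn_forward` a name chosen here and layer sizes that are illustrative, not the patent's tuned hidden-unit counts:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dbn_forward(v, weights, biases):
    """Propagate a visible vector up through stacked RBM layers.

    After greedy layer-wise pre-training, the activations P(h^1 | v) of
    the first RBM serve as the visible input of the second, and so on,
    matching the factorization of formula (8).
    """
    h = v
    for W, b in zip(weights, biases):
        h = sigmoid(b + h @ W)
    return h

rng = np.random.default_rng(2)
v = rng.random(16)                                # one sub-block histogram
weights = [rng.standard_normal((16, 10)), rng.standard_normal((10, 6))]
biases = [np.zeros(10), np.zeros(6)]
h2 = dbn_forward(v, weights, biases)
```

The top-layer activations `h2` are what the classifier in step S8 consumes.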
Compared with the prior art, the face recognition method of the invention, robust to illumination, pose, and expression variation, has the following advantages.
The invention reduces the computational complexity of feature extraction. As the above technical scheme shows, CS-LBP compares the gray values of the pixel pairs $g_i$ and $g_{i+N/2}$ that are symmetric about the central pixel, so its feature dimensionality is low (16 values instead of LBP's 256 for 8 neighbors) and its computational complexity is low. It is a more compact descriptor that captures gradient information, so the useful information is richer.
The invention has stronger noise resistance and therefore a higher recognition rate. Noise, such as slight camera vibration, changes the pixel values of the image; because CS-LBP compares the values of two pixels symmetric about the central point, the influence of noise on the recognition rate can be reduced.
The invention suppresses the influence of small variations in illumination, pose, and expression to a certain extent. Because CS-LBP is invariant to illumination and rotation, the extracted texture feature $T$ does not change, i.e., the texture feature input to the deep belief network does not change either; hence small variations in illumination, pose, and expression are suppressed to a certain extent.
Brief description of the drawings
Fig. 1 is a flow diagram of an embodiment of the present invention.
Fig. 2 is a schematic diagram of the ORL face database.
Fig. 3 is a schematic diagram of the Extended Yale B face database.
Fig. 4 is a schematic diagram of the CMU-PIE face database.
Fig. 5 shows the influence of different partitioning schemes on the recognition rate for the ORL database.
Fig. 6 shows the influence of different partitioning schemes on the recognition rate for the Extended Yale B database.
Fig. 7 shows the influence of different partitioning schemes on the recognition rate for the CMU-PIE database.
Fig. 8 is a flowchart of CS-LBP feature extraction.
Fig. 9 illustrates the CS-LBP feature extraction process for a face image.
Fig. 10 shows the influence of different numbers of hidden units on the recognition rate for the ORL database.
Fig. 11 shows the influence of different numbers of hidden units on the recognition rate for the Extended Yale B database.
Fig. 12 shows the influence of different numbers of hidden units on the recognition rate for the CMU-PIE database.
Fig. 13 shows the comparative experimental results of the recognition methods on the ORL, Extended Yale B, and CMU-PIE face databases.
Specific embodiment
To make the technical solution, features, and technical effects of the present invention clearer, the technical solution is described below in detail with reference to the accompanying drawings and an exemplary embodiment; the steps of the technical solution are shown in Fig. 1.
Embodiment.
Step 1: obtain face images. The face images used in the present invention are all downloaded from three common face databases: the ORL face database, the Extended Yale B face database, and the CMU-PIE face database. Schematic diagrams of the face images are shown in Figs. 2, 3, and 4, respectively.
Step 2: divide all face images into a training set and a test set and partition each image into $k$ sub-blocks. A certain number of face images are chosen from each of the three databases as the test set and training set, and the recognition rate of the invention is computed separately for several different partitioning schemes; the scheme with the highest recognition rate is taken as the best partitioning scheme for each database. The results are shown in Figs. 5, 6, and 7.
Step 3: using the CS-LBP coding rule, extract the texture feature value of each sub-block; the feature value is denoted $CS\text{-}LBP$. The CS-LBP feature extraction flowchart is shown in Fig. 8.
Step 4: build the CS-LBP texture-feature histogram; a statistical histogram represents the local texture feature of each sub-block.
Step 5: concatenate the feature histograms of all sub-blocks in order to form the CS-LBP feature vector of the face image, as shown in Fig. 9.
Step 6: input the texture feature vector $T$ obtained in step 5 to the visible layer of the deep belief network. The invention first sets the number of hidden layers to 2; the joint distribution of the visible layer and the two hidden layers then follows from the joint distribution of the visible and hidden layers, and the activation probability of the first-layer hidden units follows from the relationship between the visible units and the hidden units of the first hidden layer. Because there exists an optimal number of hidden units that maximizes the recognition rate of the invention, a certain number of training and test images are chosen from each of the three databases and, under each database's best partitioning scheme, face recognition experiments with the proposed method are carried out to find the best number of hidden units. The experimental results are shown in Figs. 10, 11, and 12.
Step 7: optimize the weights $W$ with the deep belief network's iterative training algorithm to obtain the optimal trained network. The number of iterations is 30, the criterion for the optimal network is that the maximum generation probability of the training set is maximized, and the learning rate is set to 0.001.
Step 8: the top layer of the optimal network from step 7 performs classification with a classifier, yielding the class labels of the test samples. The recognition results of the invention and of other common face recognition methods on the three databases are shown in Fig. 13.
In summary, the embodiment of the present invention provides a face recognition method robust to illumination, pose, and expression variation, comprising: downloading face images and dividing them into a training set and a test set; extracting face-image features with CS-LBP and generating the texture-feature statistical histogram; inputting the extracted features to the visible layer of the deep belief network; and optimizing the weights with the DBN's iterative training algorithm to obtain the optimal trained network. On this basis, repeated experiments on the face databases determine the optimal image partitioning scheme and the best number of hidden units of the DBN's hidden layers. Since the embodiment is similar to the method steps, its description is relatively brief. The above specific embodiment elaborates the principle and implementation of the present invention and is merely intended to help understand its core technical content, not to limit its protection scope; the technical solution of the present invention is not limited to the above specific embodiment.

Claims (4)

1. A face recognition method robust to illumination, pose, and expression variation, characterized in that the method comprises the following steps:
S1. Obtain face images: download face images from a face image database;
S2. Divide all face images into a training set and a test set, and partition each image into $k$ sub-blocks;
S3. Using the coding rule of the center-symmetric local binary pattern (CS-LBP), extract the texture feature value of each sub-block; the feature value is denoted $CS\text{-}LBP$;
S4. Build the CS-LBP texture-feature histogram; a statistical histogram represents the local texture feature of each sub-block; the histogram of the $i$-th sub-block is expressed as:

$$H_i(j) = \sum_{x,y} I\{\,CS\text{-}LBP(x,y) = j\,\}, \qquad j = 0, 1, \dots, J-1 \tag{1}$$

In formula (1), $I\{A\} = 1$ when $A$ is true and $0$ otherwise; $H_i(j)$ is the frequency with which the CS-LBP texture feature value in the sub-block equals $j$; $j \in [0, J)$, where $J$ is the number of distinct CS-LBP values, here 16;
S5. Concatenate the feature histograms of all sub-blocks in order to form the CS-LBP feature vector $T$ of the face image;
S6. The texture feature vector $T$ obtained in step S5 is input to the visible layer of a deep belief network; the joint distribution of the visible layer and the hidden layers is:

$$P(v, h^1, \dots, h^{\ell}) = P(v \mid h^1)\,P(h^1 \mid h^2)\cdots P(h^{\ell-2} \mid h^{\ell-1})\,P(h^{\ell-1}, h^{\ell}) \tag{2}$$

In formula (2), $v$ is the texture feature extracted by CS-LBP, and $h^1, \dots, h^{\ell}$ are the higher-level features of different levels that the deep belief network learns from the input feature $v$; the number of hidden layers of the invention is set to 2, so from formula (2) the joint distribution of the visible layer and the two hidden layers is:

$$P(v, h^1, h^2) = P(v \mid h^1)\,P(h^1, h^2) \tag{3}$$

In formula (3), $v$ is the visible layer, $h^1$ is the first hidden layer, and $h^2$ is the second hidden layer; from the relationship between the visible units of the visible layer and the hidden units of the first hidden layer, the activation probability of a first-layer hidden unit is:

$$P(h^1_j = 1 \mid v) = \sigma\!\Big(b_j + \sum_{i=1}^{n} v_i\, w_{ij}\Big) \tag{4}$$

In formula (4), $v_i$ is a visible unit, $n$ is the number of visible units, $h^1_j$ is a hidden unit, $\sigma(\cdot)$ is the activation function (the sigmoid), and $w_{ij}$ is the weight connecting the $i$-th visible unit to the $j$-th hidden unit;
S7. Optimize the weights $W$ with the deep belief network's iterative training algorithm to obtain the optimal trained network; the number of iterations is 30, and the criterion for the optimal network is that the maximum generation probability of the training set is maximized:

$$W^{*} = \arg\max_{W} \prod_{t=1}^{m} P\big(v^{(t)}\big) \tag{5}$$

In formula (5), $W$ is the weight matrix and $v^{(t)}$ is the CS-LBP texture feature of the $t$-th of the $m$ training samples; the learning rate is set to 0.001;
S8. The top layer of the optimal network from step S7 performs classification with a classifier, yielding the class label of each test sample.
2. The face recognition method robust to illumination, pose, and expression variation according to claim 1, characterized in that: the block partitioning in step S2 seeks the optimal partitioning scheme; the recognition rate of several partitioning schemes is computed for each face database, and the scheme with the highest recognition rate is taken as the best partitioning scheme for the corresponding database.
3. The face recognition method robust to illumination, pose, and expression variation according to claim 1, characterized in that: the center-symmetric local binary pattern in step S3 compares the gray values of the two pixels $g_i$ and $g_{i+N/2}$ that are symmetric about the central pixel; when the difference is greater than 0, the corresponding binary bit is 1, otherwise 0; the binary code is converted to decimal to obtain the CS-LBP feature value, as shown in formulas (6) and (7):

$$CS\text{-}LBP_{R,N}(x_c, y_c) = \sum_{i=0}^{N/2-1} s\big(g_i - g_{i+N/2}\big)\, 2^{i} \tag{6}$$

$$s(x) = \begin{cases} 1, & x > 0 \\ 0, & \text{otherwise} \end{cases} \tag{7}$$

In formula (6), $N$ is the number of neighborhood pixels around the central pixel, i.e. $g_0, g_1, \dots, g_{N-1}$, sampled on a circle of radius $R$; $s(\cdot)$ is the sign function defined in formula (7); and $g_i$ is the gray value of a pixel in the neighborhood of the central pixel.
4. The face recognition method robust to illumination, pose, and expression variation according to claim 1, characterized in that: for a deep belief network with $\ell$ hidden layers in step S6, the joint distribution of the visible units and the $\ell$ layers of hidden units is as shown in formula (8):

$$P(v, h^1, \dots, h^{\ell}) = \Bigg(\prod_{k=0}^{\ell-2} P\big(h^{k} \mid h^{k+1}\big)\Bigg) P\big(h^{\ell-1}, h^{\ell}\big), \qquad h^0 = v \tag{8}$$

In formula (8), each conditional $P(h^{k} \mid h^{k+1})$ is parameterized by the bias of layer $k$ and the weights between layer $k$ and layer $k+1$; within the deep belief network, each pair of adjacent layers can be regarded as a restricted Boltzmann machine: when the input is $v$, the hidden layer $h^1$ is obtained through $P(h^1 \mid v)$; when the input is $h^1$, the visible layer is reconstructed through $P(v \mid h^1)$; this process is the deep learning process; for each hidden layer of the deep belief network there exists an optimal number of hidden units that maximizes the recognition rate of the invention, and through experiments on the face databases the invention determines the best number of hidden units of the deep belief network for each database.
CN201910153743.5A 2019-03-01 2019-03-01 A face recognition method robust to illumination, pose, and expression variation Pending CN109740578A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910153743.5A CN109740578A (en) 2019-03-01 2019-03-01 A face recognition method robust to illumination, pose, and expression variation


Publications (1)

Publication Number Publication Date
CN109740578A true CN109740578A (en) 2019-05-10

Family

ID=66368943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910153743.5A Pending CN109740578A (en) 2019-03-01 2019-03-01 A face recognition method robust to illumination, pose, and expression variation

Country Status (1)

Country Link
CN (1) CN109740578A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287780A * 2019-05-17 2019-09-27 长安大学 An illumination-robust face image feature extraction method
CN110555460A (en) * 2019-07-31 2019-12-10 国网江苏省电力有限公司 Image slice-based bird detection method for power transmission line at mobile terminal
CN110638464A (en) * 2019-09-10 2020-01-03 哈尔滨亿尚医疗科技有限公司 Monitor, control method and device thereof, and computer-readable storage medium
CN111339856A (en) * 2020-02-17 2020-06-26 淮阴工学院 Deep learning-based face recognition method and recognition system under complex illumination condition
CN111709312A (en) * 2020-05-26 2020-09-25 上海海事大学 Local feature face recognition method based on joint main mode
CN114187641A (en) * 2021-12-17 2022-03-15 哈尔滨理工大学 Face recognition method based on GCSLBP and DBN

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107729890A (en) * 2017-11-30 2018-02-23 华北理工大学 Face identification method based on LBP and deep learning



Similar Documents

Publication Publication Date Title
CN109740578A (en) A face recognition method robust to illumination, pose, and expression variation
Xin et al. Multimodal feature-level fusion for biometrics identification system on IoMT platform
Yuan et al. Fingerprint liveness detection using an improved CNN with image scale equalization
Zhang et al. Demeshnet: Blind face inpainting for deep meshface verification
Hangaragi et al. Face detection and Recognition using Face Mesh and deep neural network
CN108520216B (en) Gait image-based identity recognition method
Abaza et al. A survey on ear biometrics
Sheela et al. Iris recognition methods-survey
Bashir et al. Feature selection on gait energy image for human identification
Wang et al. Review of ear biometrics
CN103605972A (en) Non-restricted environment face verification method based on block depth neural network
CN102844766A (en) Human eyes images based multi-feature fusion identification method
Chirchi et al. Iris biometric recognition for person identification in security systems
Alheeti Biometric iris recognition based on hybrid technique
CN107169479A (en) Intelligent mobile equipment sensitive data means of defence based on fingerprint authentication
Paul et al. Extraction of facial feature points using cumulative histogram
Abikoye et al. Iris feature extraction for personal identification using fast wavelet transform (FWT)
Mohamed et al. Avatar face recognition using wavelet transform and hierarchical multi-scale LBP
CN110222568B (en) Cross-visual-angle gait recognition method based on space-time diagram
Huang et al. Human emotion recognition based on face and facial expression detection using deep belief network under complicated backgrounds
Mohamed et al. Automated face recogntion system: Multi-input databases
Mohamed et al. Artificial face recognition using Wavelet adaptive LBP with directional statistical features
Kumar et al. Palmprint Recognition in Eigen-space
Liu et al. A novel high-resolution fingerprint representation method
Zhao et al. Cross-view gait recognition based on dual-stream network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190510