CN108205666A - A kind of face identification method based on depth converging network - Google Patents

A kind of face identification method based on depth converging network

Info

Publication number
CN108205666A
Authority
CN
China
Prior art keywords
face
depth
network
converging network
identification method
Prior art date
Legal status
Pending
Application number
CN201810056443.0A
Other languages
Chinese (zh)
Inventor
傅桂霞
邹国锋
申晋
尹丽菊
杜钦君
高明亮
胡文静
Current Assignee
Shandong University of Technology
Original Assignee
Shandong University of Technology
Priority date
Filing date
Publication date
Application filed by Shandong University of Technology filed Critical Shandong University of Technology
Priority to CN201810056443.0A priority Critical patent/CN108205666A/en
Publication of CN108205666A publication Critical patent/CN108205666A/en
Pending legal-status Critical Current


Classifications

    • G06V40/161 Human faces: Detection; Localisation; Normalisation
    • G06F18/217 Pattern recognition: Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/25 Pattern recognition: Fusion techniques
    • G06V10/267 Image preprocessing: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V40/168 Human faces: Feature extraction; Face representation

Abstract

The present invention relates to a face recognition method based on a deep aggregation network. The method comprises the following steps: in the first step, a face image is read in and divided into 4 subregions; in the second step, the local binary pattern texture feature vector of each face subregion is computed, removing interference information; in the third step, the local binary pattern texture feature vectors of the 4 subregions are fed into 4 different deep sparse autoencoders to extract deep features of the face subregions; in the fourth step, the output features of the 4 deep sparse autoencoder networks are aggregated through full connection to form an overall face feature vector used for classification.

Description

A face recognition method based on a deep aggregation network
Technical field
The present invention relates to the fields of pattern recognition and machine learning, and in particular to a face recognition method based on a deep aggregation network.
Background technology
Deep learning, as a data-driven machine learning method, builds neural networks that simulate the analytical learning of the human brain and uses them to extract deep features from data. In computer vision and image recognition, the large volumes of acquired image and video data often lack label information, so extracting effective deep features from massive data in an unsupervised manner has important research value (BENGIO Y, COURVILLE A, and VINCENT P. Representation learning: a review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(8): 1798-1828.).
The sparse autoencoder (Sparse Auto Encoder, SAE) is a classic unsupervised deep learning network. It first encodes the input data into a new representation, then decodes the features to reconstruct the unlabeled data, computes the reconstruction error between the input and the reconstruction, and trains the network with the back-propagation algorithm, thereby mining the key structural features of the data. Because the SAE learns features automatically and avoids excessive manual intervention, it has been widely applied in face recognition, scene classification, behavior understanding and other fields (ZHANG F, DU B, ZHANG L. Saliency-guided unsupervised feature learning for scene classification. IEEE Transactions on Geoscience and Remote Sensing, 2015, 53(4): 2175-2184.).
Face recognition, as a contactless technology, is intuitive, matches human thinking habits, and has been applied successfully in commerce, security and other fields. However, face images acquired under unconstrained conditions contain various interfering factors (illumination, pose, expression, etc.). Feeding such non-ideal faces directly into an SAE easily causes the deep network to learn non-face feature representations and to ignore the key local structural features of the face. In addition, feeding a face image directly into an SAE network requires converting the two-dimensional image into a vector, which makes it difficult for the SAE network to learn the structural features of the face and loses part of the key information needed for classification. It is therefore necessary to design a new face feature network architecture, based on SAE networks, that takes the characteristics of face images into account.
Summary of the invention
The purpose of the present invention is to provide a face recognition method based on a deep aggregation network. The method effectively overcomes the sensitivity of deep networks to noise in face images, avoids the loss of facial structure information caused by converting an image matrix into a vector, and, through the feature aggregation network architecture, learns face features with a clear hierarchical structure and prominent key details, thereby improving face recognition performance. The purpose of the present invention is achieved as follows:
The present invention comprises the following steps:
(1) Read in the original face image;
(2) Normalize the size of the original face image;
(3) Divide the normalized face image into 4 subregions using a 2x2 blocking scheme;
(4) Extract and fuse the features of the 4 face subregions with the deep aggregation network and perform classification.
The face features are extracted with the deep aggregation network.
First, before feature extraction with the deep aggregation network, the face subregion images are preprocessed: a circular local binary pattern operator is applied to each of the 4 face subregions, and the texture feature vector of each subregion is computed.
Then, the local binary pattern texture features of the 4 face subregions serve as the input vectors of the feature extraction network. The deep aggregation network builds its feature extractor from 4 groups of different sparse autoencoders, which extract deep features from the texture features of the 4 subregions.
Finally, the deep subregion features extracted by the 4 groups of different sparse autoencoders are aggregated through full connection to form an overall face feature vector used for classification.
The advantageous effects of the present invention are as follows:
The present invention proposes a face recognition method based on a deep aggregation network. The method preprocesses face images containing interfering factors with a local binary pattern operator, which reduces, to a certain extent, the network's learning of interference features. The original face image is divided into 4 different subregions, deep features are extracted from each subregion with SAE networks, and the overall feature formed by full connection contains the key information of each subregion and the local structural features of the face, which improves the discriminability of key face features and the face recognition performance.
Description of the drawings
Fig. 1 is the flow chart of face recognition based on the deep aggregation network;
Fig. 2 is a schematic diagram of the local binary pattern texture features of the face subregions;
Fig. 3 is the structure diagram of an autoencoder;
Fig. 4 is the structure diagram of the deep sparse autoencoder;
Fig. 5 is the structure diagram of the recognition system that fuses subregion LBP features with the deep aggregation network;
Fig. 6 shows the optimal parameter configuration of the deep aggregation network;
Fig. 7 compares the experimental results with those of other methods.
Specific embodiment
The invention will be further described below in conjunction with the accompanying drawings:
In the face recognition method based on the deep aggregation network provided by the present invention, the original face image is first divided into different subregions. Next, for each subregion, a circular-neighborhood local binary pattern (Local Binary Pattern, LBP) operator is applied to preprocess the face image containing interfering factors and obtain the LBP texture features of that subregion. The LBP features of the different subregions are then used as the inputs of multiple SAE networks to extract deep subregion features. Finally, the output features of the multiple SAE networks are aggregated to form an overall face feature vector used for classification.
1. Reading in the original face image and dividing it into subregions.
The original face image is read in and normalized in size, faces being normalized to 56x56 pixels. The MIT-CBCL public face database is used; it contains 10 subjects with 200 images each, 2000 face images in total covering different poses, illuminations and expressions. For each subject, 180 images (1800 in total) are used for network training, and the remaining 200 images are used for testing.
Because the resolution of the normalized faces is relatively low, the subregion division should not be too fine. The present invention therefore uses a 2x2 blocking scheme and divides the face image into 4 subregions.
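For illustration only (this sketch is not part of the patent text), the 2x2 blocking step could be written in Python/numpy roughly as follows; the face image is stubbed with random pixel values instead of an actual MIT-CBCL image.

import numpy as np

def split_into_quadrants(face):
    """Divide a normalized face image into 4 subregions with a 2x2 blocking scheme."""
    h, w = face.shape
    return [face[:h // 2, :w // 2],   # top-left
            face[:h // 2, w // 2:],   # top-right
            face[h // 2:, :w // 2],   # bottom-left
            face[h // 2:, w // 2:]]   # bottom-right

# Stand-in for a face image already normalized to 56x56 pixels.
face = np.random.randint(0, 256, size=(56, 56), dtype=np.uint8)
subregions = split_into_quadrants(face)
assert all(block.shape == (28, 28) for block in subregions)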
2. Local binary pattern preprocessing of the face subregion images.
With reference to Fig. 2, the circular-neighborhood local binary pattern operator is applied to each subregion separately to extract features, yielding the local binary pattern texture feature vector of each face subregion.
The local binary pattern texture feature is computed as

LBP_{N,R}(x_c, y_c) = \sum_{n=0}^{N-1} s(g_n - g_c) \, 2^n    (1)

where R is the radius of a circular region defined in the image, N is the number of neighborhood pixels uniformly distributed on that circle, g_c is the gray value of the center pixel (x_c, y_c), g_n (n = 0, 1, ..., N-1) are the gray values of the N neighborhood pixels, and the function s(x) equals 1 for x \ge 0 and 0 otherwise.
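As an illustration of equation (1) (again not part of the patent text), the numpy sketch below computes a simplified LBP with N = 8 and R = 1 (integer-grid neighbours, no interpolation) and summarizes the codes of a subregion as a normalized histogram; the histogram step and the parameter values are assumptions made only for this example.

import numpy as np

def lbp_codes(block):
    """Equation (1) for the simplest neighbourhood, N = 8 and R = 1."""
    g_c = block[1:-1, 1:-1].astype(np.int32)                    # centre pixels g_c
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]                # the 8 neighbours on the circle
    codes = np.zeros_like(g_c)
    for n, (dy, dx) in enumerate(offsets):
        g_n = block[1 + dy:block.shape[0] - 1 + dy,
                    1 + dx:block.shape[1] - 1 + dx].astype(np.int32)
        codes += (g_n >= g_c).astype(np.int32) * (2 ** n)       # s(g_n - g_c) * 2^n
    return codes

def lbp_feature_vector(block, bins=256):
    """Texture feature vector of one face subregion (here a normalized LBP histogram)."""
    hist, _ = np.histogram(lbp_codes(block), bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)

# e.g. one 256-dimensional texture vector per subregion from the previous sketch:
# texture_vectors = [lbp_feature_vector(block) for block in subregions]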
3. Deep feature extraction from the subregion texture features.
After preprocessing, 4 groups of deep sparse autoencoder networks are used to extract deep features from the texture features of the face subregions.
With reference to Fig. 3, which shows the network structure of an autoencoder: an autoencoder is a 3-layer neural network consisting of an input layer, a hidden layer and an output layer. The mapping from the input layer to the hidden layer is the encoding process, which forms the feature representation of the data; the mapping from the hidden layer to the output layer is the decoding process. The network is trained with the back-propagation algorithm so that the decoded output approximates the input. The samples used for training the autoencoder carry no class labels; the parameters of the encoder and decoder are adjusted to minimize the reconstruction error between the network output and the input data, thereby extracting features of the input data.
The reconstruction error function of the autoencoder is

J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \frac{1}{2} \left\| \hat{x}^{(i)} - x^{(i)} \right\|^2    (2)

where m is the number of samples, x^{(i)} is the i-th input vector, \hat{x}^{(i)} is the corresponding output (reconstruction) vector, and \theta is the set of all parameters in the network.
The core idea of the sparse autoencoder (Sparse autoencoder, SAE) is to constrain the hidden layer so that its activations become sparse. Using the KL divergence, the following sparsity constraint is added to the autoencoder:

\sum_{j=1}^{s} \mathrm{KL}(\rho \,\|\, \hat{\rho}_j)    (3)

where s is the number of hidden units, \hat{\rho}_j is the average activation of the j-th hidden unit, and \rho is a manually set constant close to 0. \mathrm{KL}(\rho \,\|\, \hat{\rho}_j) is the relative entropy between two variables with means \rho and \hat{\rho}_j respectively, computed as

\mathrm{KL}(\rho \,\|\, \hat{\rho}_j) = \rho \log \frac{\rho}{\hat{\rho}_j} + (1 - \rho) \log \frac{1 - \rho}{1 - \hat{\rho}_j}    (4)

The total reconstruction error of the SAE is then

J_{sparse}(\theta) = J(\theta) + \beta \sum_{j=1}^{s} \mathrm{KL}(\rho \,\|\, \hat{\rho}_j)    (5)

where \beta is the weight factor that controls the strength of the sparsity constraint.
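A small numpy sketch of equations (2) to (5) follows for illustration; the values of rho, beta and the layer sizes are arbitrary choices made for this example and are not taken from the patent.

import numpy as np

def kl_divergence(rho, rho_hat):
    """Equation (4): relative entropy between variables with means rho and rho_hat."""
    return rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))

def sae_loss(x, x_hat, hidden_activations, rho=0.05, beta=3.0):
    """Equation (5): reconstruction error (2) plus the weighted sparsity penalty (3)."""
    m = x.shape[0]
    reconstruction = 0.5 * np.sum((x_hat - x) ** 2) / m                 # equation (2)
    rho_hat = np.clip(hidden_activations.mean(axis=0), 1e-6, 1 - 1e-6)  # average activation per unit
    sparsity = np.sum(kl_divergence(rho, rho_hat))                      # equation (3)
    return reconstruction + beta * sparsity

# Toy usage with random data standing in for a batch of LBP vectors and the SAE outputs.
x = np.random.rand(8, 256)        # 8 samples, 256-dimensional inputs
hidden = np.random.rand(8, 100)   # sigmoid activations of a 100-unit hidden layer
x_hat = np.random.rand(8, 256)    # reconstructions produced by the decoder
print(sae_loss(x, x_hat, hidden))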
The deep sparse autoencoder built in the present invention is formed by cascading multiple SAE layers; Fig. 4 shows the structure of the deep sparse autoencoder network.
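One possible way to express such a cascade is sketched below in PyTorch; the layer sizes, the sigmoid activations and the omission of the training loop are assumptions made for this illustration, not details given by the patent.

import torch
import torch.nn as nn

class DeepSparseAutoencoder(nn.Module):
    """Several SAE layers cascaded: stacked encoders with a mirrored decoder."""
    def __init__(self, sizes=(256, 128, 64)):   # example layer sizes, not from the patent
        super().__init__()
        enc, dec = [], []
        for d_in, d_out in zip(sizes[:-1], sizes[1:]):
            enc += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        for d_in, d_out in zip(reversed(sizes[1:]), reversed(sizes[:-1])):
            dec += [nn.Linear(d_in, d_out), nn.Sigmoid()]
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        h = self.encoder(x)       # deep feature of one face subregion
        x_hat = self.decoder(h)   # reconstruction used by the unsupervised loss
        return h, x_hat

# One such network per subregion; its reconstruction feeds the loss of equation (5).
sae = DeepSparseAutoencoder()
x = torch.rand(8, 256)                                  # a batch of LBP texture vectors
h, x_hat = sae(x)
recon = 0.5 * ((x_hat - x) ** 2).sum() / x.shape[0]     # equation (2)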
4. Aggregation of the subregion face features and recognition.
After the deep features of the subregions are obtained, the output features of the multiple SAE networks are aggregated through full connection to obtain the final feature, which is then connected to the classifier layer through full connection to perform classification.
With reference to Fig. 5, the features obtained by applying the LBP operation to the 4 face subregions are denoted L-F1, L-F2, L-F3 and L-F4, and the feature vectors extracted from them by the deep SAE networks are denoted F1, F2, F3 and F4. The 4 groups of feature vectors are finally aggregated through full connection into an overall feature denoted F. The overall feature F contains the key information of each subregion and the local structural features of the face and is used for the final classification.
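Continuing the PyTorch sketch, the full-connection aggregation of F1, F2, F3 and F4 into F, followed by the classifier layer, could look as follows; the fused dimension of 256 is an arbitrary choice, while the 10 output classes simply match the MIT-CBCL setting described above.

import torch
import torch.nn as nn

class AggregationHead(nn.Module):
    """Concatenate the 4 subregion features F1..F4, fuse them by full connection into
    the overall feature F, and classify F into one of num_classes identities."""
    def __init__(self, feat_dim=64, num_classes=10):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(4 * feat_dim, 256), nn.Sigmoid())
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, f1, f2, f3, f4):
        overall = self.fuse(torch.cat([f1, f2, f3, f4], dim=1))   # overall face feature F
        return self.classifier(overall)                            # class scores for recognition

# Toy usage: four 64-dimensional deep features, one per face subregion (e.g. the h above).
head = AggregationHead()
features = [torch.rand(8, 64) for _ in range(4)]
logits = head(*features)
print(logits.shape)   # torch.Size([8, 10])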
With reference to Fig. 6, the parameter configuration of the deep aggregation network designed by the present invention is given. With reference to Fig. 7, experimental comparisons are given for three baseline settings, "whole original face + optimal deep network", "whole-image LBP features + optimal deep network" and "subregion division + original face + optimal aggregation network", against the method proposed by the present invention; the results fully demonstrate the effectiveness of the designed method.

Claims (6)

1. A face recognition method based on a deep aggregation network, characterized by comprising the following steps:
(1) reading in an original face image;
(2) normalizing the size of the original face image;
(3) dividing the normalized face image into 4 subregions using a 2x2 blocking scheme;
(4) extracting and fusing the features of the 4 face subregions with the deep aggregation network and performing classification.
2. The face recognition method based on a deep aggregation network according to claim 1, characterized in that: the face features are extracted with the deep aggregation network.
3. The face recognition method based on a deep aggregation network according to claims 1 and 2, characterized in that: a circular local binary pattern operator is applied to each of the 4 face subregions for preprocessing, and the texture feature vector of each subregion is computed.
4. The face recognition method based on a deep aggregation network according to claims 1, 2 and 3, characterized in that: the deep aggregation network uses the local binary pattern texture features of the 4 face subregions as the input vectors of the feature extraction network.
5. The face recognition method based on a deep aggregation network according to claims 1 and 2, characterized in that: the deep aggregation network builds its feature extractor from 4 groups of different sparse autoencoders, which extract deep features from the texture features of the 4 face subregions.
6. The face recognition method based on a deep aggregation network according to claims 1, 2 and 5, characterized in that: the deep subregion features extracted by the 4 groups of different sparse autoencoders are aggregated through full connection to form an overall face feature vector used for classification.
CN201810056443.0A 2018-01-21 2018-01-21 A kind of face identification method based on depth converging network Pending CN108205666A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810056443.0A CN108205666A (en) 2018-01-21 2018-01-21 A kind of face identification method based on depth converging network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810056443.0A CN108205666A (en) 2018-01-21 2018-01-21 A kind of face identification method based on depth converging network

Publications (1)

Publication Number Publication Date
CN108205666A true CN108205666A (en) 2018-06-26

Family

ID=62605135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810056443.0A Pending CN108205666A (en) 2018-01-21 2018-01-21 A kind of face identification method based on depth converging network

Country Status (1)

Country Link
CN (1) CN108205666A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763507B (en) * 2010-01-20 2013-03-06 北京智慧眼科技发展有限公司 Face recognition method and face recognition system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DONG YI ET AL.: "Deep Metric Learning for Practical Person Re-Identification", 《JOURNAL OF LATEX CLASS FILES》 *
易炎等 (YI Yan et al.): "基于LBP和栈式自动编码器的人脸识别算法研究" [Face recognition algorithm based on LBP and stacked autoencoders], 《计算机工程与应用》 (Computer Engineering and Applications) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109522867A (en) * 2018-11-30 2019-03-26 国信优易数据有限公司 A kind of video classification methods, device, equipment and medium
CN110969646A (en) * 2019-12-04 2020-04-07 电子科技大学 Face tracking method adaptive to high frame rate
CN113657498A (en) * 2021-08-17 2021-11-16 展讯通信(上海)有限公司 Biological feature extraction method, training method, authentication method, device and equipment
CN113657498B (en) * 2021-08-17 2023-02-10 展讯通信(上海)有限公司 Biological feature extraction method, training method, authentication method, device and equipment

Similar Documents

Publication Publication Date Title
CN113221639B (en) Micro-expression recognition method for representative AU (AU) region extraction based on multi-task learning
CN109902585B (en) Finger three-mode fusion recognition method based on graph model
Zhang et al. One-two-one networks for compression artifacts reduction in remote sensing
CN108734711A (en) The method that semantic segmentation is carried out to image
CN113378906B (en) Unsupervised domain adaptive remote sensing image semantic segmentation method with feature self-adaptive alignment
CN105205453B (en) Human eye detection and localization method based on depth self-encoding encoder
Gong et al. Advanced image and video processing using MATLAB
CN111401384A (en) Transformer equipment defect image matching method
CN111738231A (en) Target object detection method and device, computer equipment and storage medium
CN113240691A (en) Medical image segmentation method based on U-shaped network
CN105718889A (en) Human face identity recognition method based on GB(2D)2PCANet depth convolution model
CN104298974A (en) Human body behavior recognition method based on depth video sequence
Dong et al. Infrared image colorization using a s-shape network
JP2021532453A (en) Extraction of fast and robust skin imprint markings using feedforward convolutional neural networks
CN114187450A (en) Remote sensing image semantic segmentation method based on deep learning
CN108205666A (en) A kind of face identification method based on depth converging network
CN113822314A (en) Image data processing method, apparatus, device and medium
CN108038486A (en) A kind of character detecting method
CN106355210B (en) Insulator Infrared Image feature representation method based on depth neuron response modes
CN112560865A (en) Semantic segmentation method for point cloud under outdoor large scene
CN113111758A (en) SAR image ship target identification method based on pulse neural network
CN107766810B (en) Cloud and shadow detection method
CN114463340B (en) Agile remote sensing image semantic segmentation method guided by edge information
Bounsaythip et al. Genetic algorithms in image processing-a review
Zuo et al. A remote sensing image semantic segmentation method by combining deformable convolution with conditional random fields

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 20180626)