CN109766792A - A person identification method based on facial images - Google Patents

A person identification method based on facial images

Info

Publication number
CN109766792A
CN109766792A
Authority
CN
China
Prior art keywords
face
network
frame
picture
layers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811589337.5A
Other languages
Chinese (zh)
Inventor
路小波
秦晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201811589337.5A
Publication of CN109766792A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

This invention discloses a person identification method based on facial images. The method preprocesses an acquired picture for size and storage format, scaling it so that its dimensions are less than or equal to 1000*1000 pixels and converting its storage format to floating point. The preprocessed picture is fed into a 3-level cascaded deep network to obtain the positions of the face and facial landmark points. According to these positions, the whole picture is rotated so that the person's two eyes lie on the same horizontal line, correcting the face angle, and the face region is cropped out. Deep features are then extracted from the cropped face region of step (3) and compared with the stored face deep features, thereby judging the identity of the current face. The method is fully automatic and robust to multi-pose faces, so the person need not actively cooperate with the recognition; it can be applied in many scenarios and can operate covertly.

Description

A person identification method based on facial images
Technical field
The invention belongs to the field of deep learning, and in particular relates to a person identification method based on facial images.
Background technique
In recent years, the large-scale deployment of cameras in public places and progress in image algorithms have brought face recognition devices into more and more places. At present, most face recognition devices are installed at fixed positions for checking personnel identity, and require people to actively adjust their posture to meet the recognition requirements. Such devices are therefore impractical in settings such as railway stations or on vehicles.
Summary of the invention
Purpose of the invention: in order to solve the problems in the prior art, the present invention provides a person identification method based on facial images that does not require the person to actively correct their posture, automatically detects the face location, is robust to pose angle, and achieves both real-time performance and high accuracy.
Technical solution: the present invention provides a person identification method based on facial images, comprising the following steps:
(1) Preprocess the acquired picture for size and storage format: scale the picture so that its dimensions are less than or equal to 1000*1000 pixels, and convert the picture storage format to floating point;
(2) Feed the preprocessed picture into a 3-level cascaded deep network to obtain the positions of the face and facial landmark points. The 3-level cascaded deep network comprises a first-level network, a second-level network and a third-level network. The preprocessed picture is processed by the first-level network, which outputs several face candidate boxes; the second-level network removes the incorrect face candidate boxes among them, retaining the candidate boxes that belong to a face with high probability; the third-level network screens, corrects and merges the candidate boxes retained by the second-level network, obtaining the positions of the face and the facial landmark points;
(3) According to the positions of the face and facial landmark points, rotate the whole picture so that the person's two eyes lie on the same horizontal line, correcting the face angle, and crop out the face region;
(4) Extract deep features from the face region obtained in step (3), compare them with the stored face deep features, and thereby judge the identity of the current face.
Further, the first-level network includes a full convolutional network, a bounding-box regression correction network and a non-maximum suppression network, wherein the full convolutional network has 4 convolutional layers. The multi-scale picture preprocessed in step (1) is fed into the full convolutional network; after 3 convolutional layers, the network splits into 2 branches at the 4th layer. The first branch, after one convolutional layer and one normalization layer, outputs the probability that the current detection box belongs to a face; the second branch outputs the regression parameters of the bounding-box regression correction network. The detection boxes are corrected by the bounding-box regression correction network, and highly overlapping face detection boxes are then merged by the non-maximum suppression network.
Further, the second-level network includes a screening network, a bounding-box regression correction network and a non-maximum suppression network. The screening network consists of 3 convolutional layers, 2 pooling layers and 2 fully connected layers, with the convolutional and pooling layers arranged alternately; 2 branches split off at the 2nd fully connected layer. The first branch, after one fully connected layer and one normalization layer, outputs the probability that the current detection box belongs to a face; the second branch outputs, through one fully connected layer, the regression parameters needed by the bounding-box regression correction network. The detection boxes are corrected by the bounding-box regression correction network, and the non-maximum suppression network then removes the incorrect face candidate boxes, retaining the candidate boxes that belong to a face with high probability.
Further, the third-level network includes an output network, a bounding-box regression correction network and a non-maximum suppression network. The output network consists of 4 convolutional layers, 3 pooling layers and 2 fully connected layers, with the convolutional and pooling layers arranged alternately; 3 branches split off at the 2nd fully connected layer. The first and second branches output the face probability and the regression parameters respectively; the third branch outputs the coordinates of the left and right eyes, the nose tip, and the left and right corners of the mouth.
Further, the regression parameters include a horizontal translation parameter, a vertical translation parameter, a horizontal scaling parameter and a vertical scaling parameter.
Further, in step (3) the whole picture is rotated using an affine transformation, whose expression is:
u = (x - c1)·cos θ + (y - c2)·sin θ + c1
v = -(x - c1)·sin θ + (y - c2)·cos θ + c2
where θ is the angle between the line connecting the two eyes and the horizontal, (c1, c2) is the coordinate of the midpoint of the line connecting the two eyes, (u, v) is the coordinate of the corresponding point in the target image, and (x, y) is the coordinate of the corresponding point in the original image.
Further, in step (3), according to the structure of the human face, the width of the face box is enlarged to 2 times the distance between the two eyes after correction, and the height of the face box is enlarged to 4 times the difference between the vertical coordinates of the nose and the eyes after correction. The enlarged face box is then cropped as follows: the horizontal coordinate of the upper-left corner of the face box is the right-eye horizontal coordinate minus 0.25 times the width of the enlarged face box, and the vertical coordinate of the upper-left corner is the right-eye vertical coordinate minus 0.5 times the height of the enlarged face box; the cropped face is then scaled to the same size.
Further, the deep features extracted in step (4) are the 4096-dimensional deep feature vectors generated in the middle of the deep network.
Further, the cosine similarity method is used: the cosine of the angle between the deep feature vector extracted from the current face and the stored face feature vector is computed; the closer the value is to 1, the more similar the two vectors are.
Beneficial effects: the method of the invention does not require the identified person to make active adjustments; detection and processing are fully automatic, and recognition of multi-angle faces reaches a certain accuracy, which enhances the practicability of the method and allows the recognition process to be carried out without the identified person's knowledge, so the method can be applied in more scenarios. Meanwhile, the invention has a clear structure with a clear division of labor: the different parts are related but relatively independent, and each part can be implemented as a separate function. The method applies deep learning networks from multiple research frontiers, which greatly improves the overall recognition accuracy.
Detailed description of the invention
Fig. 1 is the flow chart of the person identification method based on facial images of the invention;
Fig. 2 is the block diagram of the 3-level cascaded deep network of the invention;
Fig. 3 is the block diagram of the deep network of the invention.
Specific embodiment
The present invention is further illustrated below with reference to the embodiments and the accompanying drawings.
The present invention provides a person identification method based on facial images, as shown in Fig. 1, comprising the following steps:
Step 1: preprocess the acquired picture for size and storage format, scaling the picture so that its dimensions are less than or equal to 1000*1000 pixels and converting the picture storage format to floating point. In addition, for a specific working environment and device configuration, operations such as illumination correction and filtering can be added to exclude the corresponding interference to a certain extent;
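As a concrete illustration of this step, the size limit and float conversion could look like the following. This is a minimal NumPy sketch, assuming nearest-neighbour downscaling and a [0, 1] float range (the patent specifies neither the resampling method nor the float normalization):

```python
import numpy as np

def preprocess(picture: np.ndarray, max_side: int = 1000) -> np.ndarray:
    """Scale so neither side exceeds max_side, then convert to float32 in [0, 1]."""
    h, w = picture.shape[:2]
    scale = max(h, w) / max_side
    if scale > 1.0:
        # nearest-neighbour downscale; a real system would use bilinear resampling
        rows = np.linspace(0, h - 1, int(round(h / scale))).astype(int)
        cols = np.linspace(0, w - 1, int(round(w / scale))).astype(int)
        picture = picture[rows][:, cols]
    return picture.astype(np.float32) / 255.0
```

A 2000*1500 input, for example, would come out as a 1000*750 float32 array.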
Step 2: feed the preprocessed picture into the 3-level cascaded deep network to obtain the positions of the face and the facial landmark points. The 3-level cascaded deep network comprises a first-level network, a second-level network and a third-level network, as shown in Fig. 2, where data denotes the input layer, conv a convolutional layer, PReLU the PReLU (Parametric Rectified Linear Unit) activation function, pool a pooling layer, InnerProduct a fully connected layer, dropout a Dropout layer, and prob a Softmax layer; the number attached to a layer is its number of outputs.
The first-level network includes a full convolutional network P-Net, a bounding-box regression correction network and a non-maximum suppression network, where P-Net is a 4-layer full convolutional network. The multi-scale pictures obtained in step 1 are fed into P-Net; after 3 convolutional layers, the input splits into 2 branches at the 4th layer. The first branch generates 2 outputs through a convolutional layer, the likelihoods that the current detection box is a face and not a face respectively; after a normalization layer (softmax layer), these yield the probability that the current detection box belongs to a face. The second branch generates 4 outputs: the horizontal translation parameter, vertical translation parameter, horizontal scaling parameter and vertical scaling parameter, i.e. the regression parameters of the bounding box regression method. The detection boxes are corrected with these regression parameters, and highly overlapping face detection boxes are then merged by non-maximum suppression (NMS). This step generates a large number of face candidate boxes; because the network at this step is a full convolutional network, the output probabilities are not reliable enough and need to be screened more carefully afterwards.
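The NMS step that merges highly overlapping detection boxes can be sketched as follows. This is a minimal NumPy illustration of greedy IoU-based suppression; the IoU threshold of 0.5 is an assumption, since the patent does not state one:

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5) -> list:
    """Keep the highest-scoring box and suppress any box whose IoU with it
    exceeds iou_thresh; repeat on the remainder.
    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) face probabilities."""
    order = scores.argsort()[::-1]          # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of box i with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]   # drop the highly overlapping boxes
    return keep
```

Two nearly coincident face boxes thus collapse into the higher-scoring one, while well-separated boxes survive.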
The second-level network includes a screening network R-Net, a bounding-box regression correction network and a non-maximum suppression network. The screening network R-Net consists of 3 convolutional layers, 2 pooling layers and 2 fully connected layers, with the convolutional and pooling layers arranged alternately; 2 branches split off at the 2nd fully connected layer. The first branch, through one fully connected layer, outputs the likelihoods that the current detection box is a face and not a face, which after a normalization layer (softmax layer) yield the probability that the current detection box belongs to a face; the second branch, through one fully connected layer, outputs the horizontal translation parameter, vertical translation parameter, horizontal scaling parameter and vertical scaling parameter needed by the bounding box regression method.
The candidate boxes are corrected by bounding box regression, and the face candidate boxes with high overlap are finally removed by NMS. All candidate boxes generated by the first-level network are fed into this network; its output face probabilities have higher confidence, so a large number of incorrect face candidate boxes can be removed, and only a small number of high-probability candidate boxes are retained as the input of the third-level network.
The third-level network includes an output network O-Net, a bounding-box regression correction network and a non-maximum suppression network. The output network O-Net consists of 4 convolutional layers, 3 pooling layers and 2 fully connected layers, with the convolutional and pooling layers arranged alternately; 3 branches split off at the 2nd fully connected layer. The first and second branches output the face probability and the regression parameters respectively; since the deeper network extracts deeper features and applies more complex fully connected layers, its output is more accurate than that of the 2 previous networks. The third branch outputs the coordinates of 5 facial landmark points: the left and right eyes, the nose tip, and the left and right corners of the mouth. This level further screens the candidate boxes of the second-level network and finally also applies the bounding box regression correction and NMS operations. The output of this network is the final result of the face detection part; in addition to the position of the face box, the positions of the 5 facial landmark points are used in the subsequent face angle correction.
Step 3: according to the positions of the face and the facial landmark points, correct the face angle and crop out the face region. Using the landmark positions, the whole picture is rotated so that the person's two eyes lie on the same horizontal line. The rotation is realized by an affine transformation, whose expression is:
u = (x - c1)·cos θ + (y - c2)·sin θ + c1
v = -(x - c1)·sin θ + (y - c2)·cos θ + c2
where θ is the angle between the line connecting the two eyes and the horizontal, (c1, c2) is the coordinate of the midpoint of the line connecting the two eyes, (u, v) is the coordinate of the corresponding point in the target image, and (x, y) is the coordinate of the corresponding point in the original image.
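The coordinate mapping for this rotation can be illustrated as follows: a minimal sketch that rotates points about the midpoint of the eye line by the angle θ between the eye line and the horizontal, so that both eyes land on one horizontal line. Function and variable names are illustrative, not from the patent:

```python
import numpy as np

def eye_alignment_rotation(left_eye, right_eye):
    """Return a mapping from original (x, y) to target (u, v) coordinates:
    a rotation about the eye-line midpoint that levels the two eyes."""
    (x1, y1), (x2, y2) = left_eye, right_eye
    theta = np.arctan2(y2 - y1, x2 - x1)       # eye-line angle vs. horizontal
    c1, c2 = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # midpoint of the eye line
    cos_t, sin_t = np.cos(theta), np.sin(theta)

    def transform(x, y):
        u = (x - c1) * cos_t + (y - c2) * sin_t + c1
        v = -(x - c1) * sin_t + (y - c2) * cos_t + c2
        return u, v

    return transform
```

Applying the mapping to the two eye points themselves gives equal vertical coordinates, and the eye distance is preserved (rotations are rigid).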
According to the structure of the human face, the face region is cropped from the rotated picture. The specific cropping method is: enlarge the width and height of the face box, where the width face_w of the face box is 2 times the distance between the two eyes after correction, and the height face_h of the face box is 4 times the difference between the vertical coordinates of the nose and the eyes after correction; the horizontal coordinate of the upper-left corner of the face box is the right-eye horizontal coordinate minus 0.25*face_w, and the vertical coordinate is the right-eye vertical coordinate minus 0.5*face_h; the cropped face is then scaled to a uniform size;
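The crop-box arithmetic just described can be sketched in a few lines. This is a minimal illustration on aligned landmarks (eyes already on one horizontal line); the helper name and argument order are assumptions, not from the patent:

```python
def face_crop_box(right_eye, left_eye, nose):
    """Compute the enlarged face crop box (x0, y0, face_w, face_h) from
    aligned landmarks, following the width/height/offset rules above."""
    eye_dist = abs(left_eye[0] - right_eye[0])  # eyes are horizontal after alignment
    face_w = 2.0 * eye_dist                     # width: 2x the eye distance
    face_h = 4.0 * abs(nose[1] - right_eye[1])  # height: 4x nose-to-eye vertical gap
    x0 = right_eye[0] - 0.25 * face_w           # upper-left corner of the face box
    y0 = right_eye[1] - 0.5 * face_h
    return x0, y0, face_w, face_h
```

For eyes at (40, 50) and (80, 50) with the nose at (60, 70), this yields an 80*80 box with its upper-left corner at (20, 10).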
Step 4: extract deep features from the face region obtained in step 3 and compare them with the stored face deep features used for comparison, thereby identifying the personnel identity. The specific steps are as follows:
Step 4.1: feed the corrected face region into the VGG-Net deep network, as shown in Fig. 3, where conv3-64 indicates a convolutional layer with a 3*3 kernel and 64 outputs. The VGG-Net deep network has 5 convolutional sections, each followed by 1 max-pooling layer, and each convolutional layer uses ReLU as the activation function. Finally there are 2 fully connected layers, each with 4096 outputs; the 4096-dimensional deep feature vector generated in the middle of the network is used as the descriptive feature vector of the face;
Step 4.2: compare the deep feature vector extracted from the current face with the stored face feature vectors extracted by the same operations; if the similarity is high enough, the identity of the current face can be determined. The similarity is computed with the cosine similarity method, i.e. the cosine of the angle between the two vectors: the closer it is to 1, the more similar the two vectors are. Note that the face features in the database used for comparison and the features extracted on-site must go through the same processing steps. The face whose similarity is the highest and reaches a certain threshold is taken as the recognition result.
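The gallery matching in this step can be sketched as follows: a minimal NumPy illustration of cosine similarity with a best-match-above-threshold decision. The threshold value is arbitrary, since the patent says only that a certain threshold must be reached:

```python
import numpy as np

def identify(query: np.ndarray, gallery: dict, threshold: float = 0.5):
    """Return (name, similarity) for the gallery identity whose stored feature
    vector has the highest cosine similarity to the query, provided the
    similarity reaches the threshold; otherwise (None, best_similarity)."""
    best_name, best_sim = None, -1.0
    for name, feat in gallery.items():
        # cosine of the angle between the two feature vectors
        sim = float(np.dot(query, feat) /
                    (np.linalg.norm(query) * np.linalg.norm(feat)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return (best_name, best_sim) if best_sim >= threshold else (None, best_sim)
```

In a real deployment the query and every gallery entry would be 4096-dimensional vectors produced by the same VGG-Net pipeline; 2-dimensional vectors are used here only to keep the illustration small.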

Claims (9)

1. A person identification method based on facial images, characterized in that the recognition method comprises the following steps:
(1) preprocessing the acquired picture for size and storage format, so that the picture dimensions are less than or equal to 1000*1000 pixels, and converting the picture storage format to floating point;
(2) feeding the preprocessed picture into a 3-level cascaded deep network to obtain the positions of the face and facial landmark points, wherein the 3-level cascaded deep network comprises a first-level network, a second-level network and a third-level network; the preprocessed picture is processed by the first-level network, which outputs several face candidate boxes; the second-level network removes the incorrect face candidate boxes among them, retaining the candidate boxes that belong to a face with high probability; the third-level network screens, corrects and merges the candidate boxes retained by the second-level network, obtaining the positions of the face and the facial landmark points;
(3) according to the positions of the face and facial landmark points, rotating the whole picture so that the person's two eyes lie on the same horizontal line, correcting the face angle, and cropping out the face region;
(4) extracting deep features from the face region obtained in step (3), comparing them with the stored face deep features, and thereby judging the identity of the current face.
2. The person identification method based on facial images according to claim 1, characterized in that: the first-level network includes a full convolutional network, a bounding-box regression correction network and a non-maximum suppression network, wherein the full convolutional network has 4 convolutional layers; the multi-scale picture preprocessed in step (1) is fed into the full convolutional network, and after 3 convolutional layers the network splits into 2 branches at the 4th layer; the first branch, after one convolutional layer and one normalization layer, outputs the probability that the current detection box belongs to a face; the second branch outputs the regression parameters of the bounding-box regression correction network; the detection boxes are corrected by the bounding-box regression correction network, and highly overlapping face detection boxes are then merged by the non-maximum suppression network.
3. The person identification method based on facial images according to claim 1, characterized in that: the second-level network includes a screening network, a bounding-box regression correction network and a non-maximum suppression network; the screening network consists of 3 convolutional layers, 2 pooling layers and 2 fully connected layers, with the convolutional and pooling layers arranged alternately; 2 branches split off at the 2nd fully connected layer; the first branch, after one fully connected layer and one normalization layer, outputs the probability that the current detection box belongs to a face; the second branch outputs, through one fully connected layer, the regression parameters needed by the bounding-box regression correction network; the detection boxes are corrected by the bounding-box regression correction network, and the non-maximum suppression network then removes the incorrect face candidate boxes, retaining the candidate boxes that belong to a face with high probability.
4. The person identification method based on facial images according to claim 1, characterized in that: the third-level network includes an output network, a bounding-box regression correction network and a non-maximum suppression network; the output network consists of 4 convolutional layers, 3 pooling layers and 2 fully connected layers, with the convolutional and pooling layers arranged alternately; 3 branches split off at the 2nd fully connected layer; the first and second branches output the face probability and the regression parameters respectively, and the third branch outputs the coordinates of the left and right eyes, the nose tip, and the left and right corners of the mouth.
5. The person identification method based on facial images according to any one of claims 2 to 4, characterized in that: the regression parameters include a horizontal translation parameter, a vertical translation parameter, a horizontal scaling parameter and a vertical scaling parameter.
6. The person identification method based on facial images according to claim 1, characterized in that: in step (3) the whole picture is rotated using an affine transformation, whose expression is:
u = (x - c1)·cos θ + (y - c2)·sin θ + c1
v = -(x - c1)·sin θ + (y - c2)·cos θ + c2
where θ is the angle between the line connecting the two eyes and the horizontal, (c1, c2) is the coordinate of the midpoint of the line connecting the two eyes, (u, v) is the coordinate of the corresponding point in the target image, and (x, y) is the coordinate of the corresponding point in the original image.
7. The person identification method based on facial images according to claim 1, characterized in that: in step (3), according to the structure of the human face, the width of the face box is enlarged to 2 times the distance between the two eyes after correction, and the height of the face box is enlarged to 4 times the difference between the vertical coordinates of the nose and the eyes after correction; the enlarged face box is cropped as follows: the horizontal coordinate of the upper-left corner of the face box is the right-eye horizontal coordinate minus 0.25 times the width of the enlarged face box, and the vertical coordinate of the upper-left corner is the right-eye vertical coordinate minus 0.5 times the height of the enlarged face box; the cropped face is then scaled to the same size.
8. The person identification method based on facial images according to claim 1, characterized in that: the deep features extracted in step (4) are the 4096-dimensional deep feature vectors generated in the middle of the deep network.
9. The person identification method based on facial images according to claim 8, characterized in that: the cosine similarity method is used to compute the cosine of the angle between the deep feature vector extracted from the current face and the stored face feature vector; the closer the value is to 1, the more similar the two vectors are.
CN201811589337.5A 2018-12-25 2018-12-25 A person identification method based on facial images Pending CN109766792A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811589337.5A CN109766792A (en) 2018-12-25 2018-12-25 A person identification method based on facial images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811589337.5A CN109766792A (en) 2018-12-25 2018-12-25 A person identification method based on facial images

Publications (1)

Publication Number Publication Date
CN109766792A 2019-05-17

Family

ID=66450379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811589337.5A Pending CN109766792A (en) 2018-12-25 2018-12-25 A person identification method based on facial images

Country Status (1)

Country Link
CN (1) CN109766792A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276308A (en) * 2019-06-25 2019-09-24 上海商汤智能科技有限公司 Image processing method and device
CN112052441A (en) * 2020-08-24 2020-12-08 深圳市芯汇群微电子技术有限公司 Data decryption method of solid state disk based on face recognition and electronic equipment
CN112257693A (en) * 2020-12-22 2021-01-22 湖北亿咖通科技有限公司 Identity recognition method and equipment
CN113239900A (en) * 2021-06-17 2021-08-10 云从科技集团股份有限公司 Human body position detection method and device and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103605965A (en) * 2013-11-25 2014-02-26 苏州大学 Multi-pose face recognition method and device
US9530047B1 (en) * 2013-11-30 2016-12-27 Beijing Sensetime Technology Development Co., Ltd. Method and system for face image recognition
CN108304788A (en) * 2018-01-18 2018-07-20 陕西炬云信息科技有限公司 Face identification method based on deep neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103605965A (en) * 2013-11-25 2014-02-26 苏州大学 Multi-pose face recognition method and device
US9530047B1 (en) * 2013-11-30 2016-12-27 Beijing Sensetime Technology Development Co., Ltd. Method and system for face image recognition
CN108304788A (en) * 2018-01-18 2018-07-20 陕西炬云信息科技有限公司 Face identification method based on deep neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KAIPENG ZHANG等: "Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks", 《IEEE SIGNAL PROCESSING LETTERS》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276308A (en) * 2019-06-25 2019-09-24 上海商汤智能科技有限公司 Image processing method and device
CN112052441A (en) * 2020-08-24 2020-12-08 深圳市芯汇群微电子技术有限公司 Data decryption method of solid state disk based on face recognition and electronic equipment
CN112257693A (en) * 2020-12-22 2021-01-22 湖北亿咖通科技有限公司 Identity recognition method and equipment
CN113239900A (en) * 2021-06-17 2021-08-10 云从科技集团股份有限公司 Human body position detection method and device and computer readable storage medium
CN113239900B (en) * 2021-06-17 2024-06-07 云从科技集团股份有限公司 Human body position detection method, device and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN109740413B (en) Pedestrian re-identification method, device, computer equipment and computer storage medium
CN109766792A (en) A person identification method based on facial images
CN107832672B (en) Pedestrian re-identification method for designing multi-loss function by utilizing attitude information
CN110909690B (en) Method for detecting occluded face image based on region generation
CN105095856B (en) Face identification method is blocked based on mask
Huijuan et al. Fast image matching based-on improved SURF algorithm
CN104008370B (en) A kind of video face identification method
CN109961049A (en) Cigarette brand recognition methods under a kind of complex scene
CN109543606A (en) A kind of face identification method that attention mechanism is added
CN106327507B (en) A kind of color image conspicuousness detection method based on background and foreground information
CN105574515B (en) A kind of pedestrian recognition methods again under non-overlapping visual field
US20110019920A1 (en) Method, apparatus, and program for detecting object
CN104463877B (en) A kind of water front method for registering based on radar image Yu electronic chart information
CN110060273B (en) Remote sensing image landslide mapping method based on deep neural network
CN107066969A (en) A kind of face identification method
CN105869178A (en) Method for unsupervised segmentation of complex targets from dynamic scene based on multi-scale combination feature convex optimization
CN104102904B (en) A kind of static gesture identification method
CN108537816A (en) A kind of obvious object dividing method connecting priori with background based on super-pixel
CN110991398A (en) Gait recognition method and system based on improved gait energy map
CN107341445A (en) The panorama of pedestrian target describes method and system under monitoring scene
CN109359549A (en) A kind of pedestrian detection method based on mixed Gaussian and HOG_LBP
Wang et al. Study on the method of transmission line foreign body detection based on deep learning
CN107230219A (en) A kind of target person in monocular robot is found and follower method
CN110414430B (en) Pedestrian re-identification method and device based on multi-proportion fusion
Ding et al. Building detection in remote sensing image based on improved YOLOv5

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination