CN107392182A - A kind of face collection and recognition method and device based on deep learning - Google Patents

A kind of face collection and recognition method and device based on deep learning

Info

Publication number
CN107392182A
CN107392182A (application CN201710705219.5A)
Authority
CN
China
Prior art keywords
face
characteristic data
picture
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710705219.5A
Other languages
Chinese (zh)
Other versions
CN107392182B (en)
Inventor
郑士查 (Zheng Shicha)
李达 (Li Da)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Yong Hui Intelligent Technology Co Ltd
Original Assignee
Ningbo Yong Hui Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Yong Hui Intelligent Technology Co Ltd filed Critical Ningbo Yong Hui Intelligent Technology Co Ltd
Priority to CN201710705219.5A priority Critical patent/CN107392182B/en
Publication of CN107392182A publication Critical patent/CN107392182A/en
Application granted granted Critical
Publication of CN107392182B publication Critical patent/CN107392182B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/192Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194References adjustable by an adaptive method, e.g. learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a face collection and recognition method and device based on deep learning. The method includes: step S1, collecting a large number of pictures containing faces and processing them into face files; step S2, performing deep learning on the face files to establish a learning model; step S3, comparing an input video stream against the learning model to collect the faces in the video; step S4, for each collected face, extracting face characteristic data and comparing it with the face characteristic data in a database to confirm whether they match. The device includes a face processing unit, a model establishing unit, a face collecting unit and a face recognition unit corresponding to the steps. In this way, faces can be rapidly extracted from the video stream and compared, accelerating face recognition.

Description

A kind of face collection and recognition method and device based on deep learning
Technical field
The present invention relates to the technical field of face recognition, and in particular to a face collection and recognition method and device based on deep learning.
Background technology
Face recognition technology is a biometric identification technology that identifies people from their facial feature information. It is a comparatively new biometric technique that has emerged in recent years with the rapid progress of computer, image-processing and pattern-recognition technologies: an image or video stream containing faces is collected with a video camera or camera, faces are automatically detected and tracked in the picture, and a series of related techniques is then applied to the detected faces so as to identify different people.
Current face recognition technology has shortcomings such as slow recognition speed and low recognition accuracy.
In view of the above drawbacks, the inventors arrived at the present invention through prolonged research and practice.
Summary of the invention
To overcome the above technical deficiencies, the present invention first provides a face collection and recognition method based on deep learning, which includes:
Step S1: collecting a large number of pictures containing faces and processing them into face files;
Step S2: performing deep learning on the face files to establish a learning model;
Step S3: comparing the input video stream against the learning model and collecting the faces in the video;
Step S4: for each collected face, extracting face characteristic data and comparing it with the face characteristic data in the database to confirm whether they match.
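The four steps above can be sketched as a minimal pipeline. All of the function names and data shapes below are illustrative stand-ins, not the patent's implementation:

```python
# Illustrative stand-ins for steps S1-S4; none of these names come from the patent.

def process_pictures(pictures):
    """S1: turn collected pictures into face-file records (image + face box)."""
    return [{"image": p, "box": (0, 0, 10, 10)} for p in pictures]

def train_model(face_files):
    """S2: stand-in for deep learning on the face files."""
    return {"trained_on": len(face_files)}

def detect_faces(model, video_frames):
    """S3: compare frames against the model and collect face crops.
    Here every frame is pretended to contain a face."""
    return list(video_frames)

def recognize(faces, database):
    """S4: 1:N comparison of each collected face against the database."""
    return [face in database for face in faces]

model = train_model(process_pictures(["img1", "img2"]))
faces = detect_faces(model, ["frame1", "frame2"])
print(recognize(faces, {"frame1"}))            # → [True, False]
```

In practice S2 would train a convolutional network and S3/S4 would run real detection and feature matching; the stubs only show how the stages chain together.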
Preferably, the face file includes the face pictures, the names of the pictures and the position information of the face regions.
Preferably, step S1 includes:
Step S11: collecting a large number of pictures containing faces;
Step S12: cutting out the faces, recording the face position information of each picture, and processing the result into a face file.
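The face cut-out of step S12 can be illustrated with plain array slicing; the (left, top, width, height) box layout is an assumption, since the patent only says that the face position is recorded alongside the picture:

```python
def crop_face(image, box):
    """Cut the face region out of a picture stored as a list of pixel rows.

    box is (left, top, width, height) in pixels -- this layout is an
    assumption; the patent only says the face position is recorded.
    """
    left, top, w, h = box
    face = [row[left:left + w] for row in image[top:top + h]]
    record = {"box": box}                      # location info kept with the picture
    return face, record

image = [[0] * 8 for _ in range(8)]
image[2][3] = image[2][4] = 1                  # a tiny "face" of bright pixels
face, record = crop_face(image, (3, 2, 2, 1))
print(face)                                    # → [[1, 1]]
```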
Preferably, step S4 includes:
Step S41: collecting face photos of the relevant personnel, extracting their face characteristic data, and saving the face characteristic data in the database;
Step S42: extracting the face characteristic data of the faces collected from the video;
Step S43: performing a 1:N comparison between the face characteristic data from the video and the face characteristic data in the database, and confirming whether they match.
Preferably, step S41 includes:
Step S411: collecting a face photo of the relevant person and scaling it to a picture of a fixed size;
Step S412: after confirming the positions of the eyes and lips in the scaled picture, confirming the positions of the other feature points by classification and regression;
Step S413: extracting face characteristic data from the positions of the feature points and storing it in the database.
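As a hedged illustration of step S413, one simple way to turn confirmed feature-point positions into comparable characteristic data is a vector of pairwise distances normalised by the eye-to-eye distance; the concrete feature encoding is not specified in the patent, so this is an assumption:

```python
import math

def landmark_features(points):
    """Turn facial landmark coordinates into a scale-invariant feature vector.

    A hypothetical stand-in for the patent's feature extraction: pairwise
    distances between landmarks, normalised by the eye-to-eye distance.
    Assumes the first two points are the eyes.
    """
    eye_dist = math.dist(points[0], points[1])
    feats = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            feats.append(math.dist(points[i], points[j]) / eye_dist)
    return feats

pts = [(30, 40), (70, 40), (50, 70)]             # left eye, right eye, lip
print([round(f, 2) for f in landmark_features(pts)])  # → [1.0, 0.9, 0.9]
```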
Preferably, the step S42 includes:
Step S421, to the human face photo in the video that collects, the human face photo is entered according to fixed size Row scaling, it is compressed into the picture of fixed size size;
Step S422, confirm in the picture after scaling behind the position of eyes and lip, by classification and Regression to confirm The position of other characteristic points;
Step S423, according to the position of the characteristic point, extract face characteristic data.
Preferably, step S43 includes:
Step S431: cutting the face features into multiple blocks, comparing features block by block, and confirming the comparison similarity of each block;
Step S432: determining the blocks of higher similarity according to the comparison similarity;
Step S433: aggregating the comparison similarities of the higher-similarity blocks and calculating a final comparison percentage; if the comparison percentage exceeds a set threshold, the faces are regarded as matching.
Preferably, the regression formula is:
where x is the parameter.
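The regression formula itself is not reproduced in this text. Since the later embodiments pair cross-entropy classification with a per-parameter regression over candidate boxes, one conventional choice for such a single-parameter regression loss, offered here purely as an assumption and not as the patent's actual formula, is the smooth-L1 loss:

```latex
L_{\mathrm{reg}}(x) \;=\;
\begin{cases}
0.5\,x^{2}, & |x| < 1,\\[2pt]
|x| - 0.5, & \text{otherwise.}
\end{cases}
```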
A face collection and recognition device based on deep learning, corresponding to the above face collection and recognition method, is provided next. It includes:
a face processing unit, which collects a large number of pictures containing faces and processes them into face files;
a model establishing unit, which performs deep learning on the face files to establish a learning model;
a face collecting unit, which compares the input video stream against the learning model and collects the faces in the video;
a face recognition unit, which, for each collected face, extracts face characteristic data and compares it with the face characteristic data in the database to confirm whether they match.
Preferably, the face recognition unit includes:
a personnel feature extraction subunit, which collects face photos of the relevant personnel, extracts their face characteristic data and saves it in the database;
a face feature extraction subunit, which extracts the face characteristic data of the faces collected from the video;
a feature comparison subunit, which performs a 1:N comparison between the face characteristic data from the video and the face characteristic data in the database and confirms whether they match.
Compared with the prior art, the beneficial effects of the present invention are as follows: a face collection and recognition method and device based on deep learning are provided, so that faces can be rapidly extracted from the video stream and compared, accelerating face recognition; a face can be recognised within 0.1 seconds, which greatly improves recognition speed relative to common face extraction; and through deep learning the system can keep learning continuously, steadily raising its recognition precision far above that of common recognition techniques.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below.
Fig. 1 is a flowchart of the face collection and recognition method based on deep learning of the present invention;
Fig. 2 is a flowchart of step S1 of the method;
Fig. 3 is a schematic view of the positions of the feature points in a face picture;
Fig. 4 is a flowchart of step S4 of the method;
Fig. 5 is a flowchart of step S41;
Fig. 6 is a flowchart of step S42;
Fig. 7 is a flowchart of step S43;
Fig. 8 is a structural diagram of the face collection and recognition device based on deep learning of the present invention;
Fig. 9 is a structural diagram of the face recognition unit of the device;
Fig. 10 is a structural diagram of the personnel feature extraction subunit of the device;
Fig. 11 is a structural diagram of the face feature extraction subunit of the device;
Fig. 12 is a structural diagram of the feature comparison subunit of the device.
Detailed description of the embodiments
The above and further technical features and advantages of the present invention are described in more detail below in conjunction with the accompanying drawings.
As one of the biometric identification technologies, current face recognition has inherent shortcomings, mainly manifested in the following:
1. Poor stability of face features
The face is a three-dimensional surface of soft skin with extremely strong plasticity: it changes with expression, age and so on, and the skin itself also changes with age, make-up, cosmetic surgery or accidental injury.
2. Low reliability and security
Although every face is unique, human faces are broadly similar and the differences between many faces are very subtle, so achieving safe and reliable authentication is technically quite difficult.
3. Image collection is easily affected by external conditions
A face recognition system must cope with extremely difficult visual problems such as varying illumination, viewing angle and distance, and these complex imaging factors greatly affect the quality of the face image, so recognition performance is not sufficiently stable. Because of these shortcomings, current face recognition systems can only meet normal application requirements when conditions such as viewing angle and illumination are fairly ideal.
In view of this, the inventors arrived at the present invention through prolonged research and practice.
Embodiment 1
As shown in Fig. 1, which is a flowchart of the face collection and recognition method based on deep learning of the present invention, the method includes:
Step S1: collecting a large number of pictures containing faces and processing them into face files;
Step S2: performing deep learning on the face files to establish a learning model;
Step S3: comparing the input video stream against the learning model and collecting the faces in the video;
wherein the input video stream may be collected by a video camera or camera, or may be video obtained in other ways;
Step S4: for each collected face, extracting face characteristic data and comparing it with the face characteristic data in the database to confirm whether they match.
In this way, faces can be rapidly extracted from the video stream and compared, accelerating face recognition; a face can be recognised within 0.1 seconds, which greatly improves recognition speed relative to common face extraction; and through deep learning the system can keep learning continuously, steadily raising its recognition precision far above that of common recognition techniques.
Embodiment 2
This embodiment differs from the face collection and recognition method based on deep learning described above in that, as shown in Fig. 2, step S1 includes:
Step S11: collecting a large number of pictures containing faces.
To improve the recognition accuracy, no fewer than 10,000 pictures containing faces should be collected; this allows sufficient training, which in turn greatly improves the recognition accuracy.
Step S12: cutting out the faces, recording the face position information of each picture, and processing the result into a face file.
Specifically, each picture containing a face is read, a rectangular frame is drawn on the picture to cut out the face, the face position information is saved, and the saved information is associated with the picture.
The face position information may be saved in an XML document, which is then associated with the picture.
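A minimal sketch of saving and reloading such an XML annotation, using only the standard library (the tag names are illustrative; the patent does not specify the XML schema):

```python
import xml.etree.ElementTree as ET

def save_annotation(picture_name, box, path):
    """Write one picture's face-location record to an XML document.

    Tag names are assumptions; the patent only says the location
    information is saved to XML and associated with the picture.
    """
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = picture_name
    region = ET.SubElement(root, "face_region")
    for tag, value in zip(("left", "top", "width", "height"), box):
        ET.SubElement(region, tag).text = str(value)
    ET.ElementTree(root).write(path)

def load_box(path):
    """Read the face box back from the XML annotation."""
    region = ET.parse(path).getroot().find("face_region")
    return tuple(int(region.find(t).text) for t in ("left", "top", "width", "height"))

save_annotation("person_001.jpg", (30, 20, 50, 40), "person_001.xml")
print(load_box("person_001.xml"))                # → (30, 20, 50, 40)
```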
On this basis, face categories may be added, such as child faces and adult faces, to increase the recognition accuracy; in that case, the category information is entered after the rectangular frame has been drawn to cut out the face.
Since this step involves a large amount of content to process, special software can be written to handle it, saving time and increasing processing speed.
The face file comprises at least sample files and processing information; the sample files are the face pictures, and the processing information is the face picture information corresponding to each picture.
The processing information comprises at least the name of the corresponding picture and the position of the face region, which allows fast reading; it may also include the size of the picture, the face category and so on, which is convenient for later reading.
The processing information may be recorded in an XML file, which is convenient to save, modify and read.
The face file also includes training files: configuration files for the training and test picture sets, so that the training program reads specific images for testing and training respectively.
Embodiment 3
This embodiment differs from the face collection and recognition method based on deep learning described above in that, in step S2:
A learning algorithm is written, the face files made above are read, deep learning is performed, and a learning model is established.
The learning algorithm is written as a program using a TensorFlow multilayer convolutional neural network.
A convolutional neural network (CNN) is a feedforward neural network whose artificial neurons respond to surrounding units within part of their coverage area; it performs outstandingly on large-scale image processing. Convolutional neural networks have developed in recent years into an efficient recognition method that has attracted wide attention. Generally, the basic structure of a CNN includes two kinds of layer. The first is the feature extraction layer: the input of each neuron is connected to a local receptive field of the previous layer, and the local feature is extracted; once a local feature has been extracted, its positional relationship to the other features is determined as well. The second is the feature mapping layer: each computation layer of the network is composed of multiple feature maps, each feature map is a plane, and all the neurons in one plane share equal weights. The feature mapping structure uses the sigmoid function, whose influence-function kernel is small, as the activation function of the convolutional network, so that the feature maps are shift-invariant. Moreover, because the neurons on one mapping plane share weights, the number of free parameters of the network is reduced. Each convolutional layer in a convolutional neural network is followed by a computation layer for local averaging and second extraction, which reduces the feature resolution; this distinctive structure extracts features twice.
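The two layer types described above, shared-weight convolution with a sigmoid activation followed by a computation layer that lowers feature resolution, can be demonstrated in a few lines of plain Python (a toy illustration, not the TensorFlow network of the embodiment):

```python
import math

def conv2d(image, kernel):
    """Valid 2-D convolution with one shared kernel: every output unit
    uses the same weights, which is the weight sharing described above."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(1.0 / (1.0 + math.exp(-s)))   # sigmoid activation
        out.append(row)
    return out

def pool2x2(fmap):
    """2x2 average pooling: the local-averaging computation layer that
    lowers feature resolution after each convolution layer."""
    return [[(fmap[i][j] + fmap[i][j+1] + fmap[i+1][j] + fmap[i+1][j+1]) / 4
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

img = [[1, 0, 1, 0, 1],
       [0, 1, 0, 1, 0],
       [1, 0, 1, 0, 1],
       [0, 1, 0, 1, 0],
       [1, 0, 1, 0, 1]]
fmap = conv2d(img, [[1, 0], [0, 1]])             # 4x4 feature map
print(len(pool2x2(fmap)), len(pool2x2(fmap)[0])) # → 2 2
```

A real network stacks several such convolution/pooling pairs and learns the kernels; here the kernel is fixed only to show the mechanics.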
The deep learning may use the TensorFlow library (TensorFlow is the second-generation artificial-intelligence learning system developed by Google on the basis of DistBelief; its name derives from its own operating logic), on which the multilayer convolutional neural network is established. Because Google has published the TensorFlow code, the specific steps of the deep learning are not expanded here.
By writing the program with a TensorFlow multilayer convolutional neural network, the categories can be defined as child faces and adult faces respectively; children and adults can then be correctly distinguished through this learning mode, improving the recognition accuracy.
Embodiment 4
This embodiment differs from the face collection and recognition method based on deep learning described above in that, in step S3:
The input video stream is saved in picture format and compared with the generated learning model; if the comparison success rate between a certain region of the picture and the model exceeds a threshold, that region is considered to contain a face, and the face part of the photo is saved.
The threshold value be 80%, so than pair accuracy rate it is higher.The threshold value can also be set according to actual conditions It is fixed.
It is described to compare to call Testsorflow storehouses to be compared with learning model.
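The region-filtering logic of this embodiment can be sketched as follows; `model_score` stands in for the real TensorFlow model comparison, which is assumed rather than implemented here:

```python
def collect_faces(frame_regions, model_score, threshold=0.80):
    """Keep the regions whose comparison score against the learning model
    exceeds the threshold (80% in the embodiment above).

    model_score is a hypothetical stand-in for the real model call.
    """
    return [region for region in frame_regions
            if model_score(region) > threshold]

# Illustrative scores instead of a real model:
scores = {"region_a": 0.93, "region_b": 0.42, "region_c": 0.81}
print(collect_faces(scores, scores.get))         # → ['region_a', 'region_c']
```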
Embodiment 5
This embodiment differs from the face collection and recognition method based on deep learning described above in that, as shown in Fig. 4, step S4 includes:
Step S41: collecting face photos of the relevant personnel, extracting their face characteristic data, and saving the face characteristic data in the database;
Step S42: extracting the face characteristic data of the faces collected from the video;
Step S43: performing a 1:N comparison between the face characteristic data from the video and the face characteristic data in the database, and confirming whether they match.
In this way, the recognition accuracy is improved.
Embodiment 6
This embodiment differs from the face collection and recognition method based on deep learning described above in that, as shown in Fig. 5, step S41 includes:
Step S411: collecting a face photo of the relevant person and scaling it to a picture of a fixed size.
In this way each feature position, for example the lips or the eyes, will appear in roughly the same place, which is convenient for feature extraction and recognition.
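The fixed-size scaling can be illustrated with a nearest-neighbour rescale; the patent does not name the interpolation method, so nearest-neighbour is an assumption:

```python
def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour rescaling to a fixed size, so that features such
    as the eyes and lips land in roughly the same place in every picture."""
    in_h, in_w = len(image), len(image[0])
    return [[image[i * in_h // out_h][j * in_w // out_w]
             for j in range(out_w)]
            for i in range(out_h)]

photo = [[r * 10 + c for c in range(6)] for r in range(6)]   # 6x6 "photo"
small = resize_nearest(photo, 3, 3)
print(small)   # → [[0, 2, 4], [20, 22, 24], [40, 42, 44]]
```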
Step S412: after confirming the positions of the eyes and lips in the scaled picture, confirming the positions of the other feature points by classification and regression.
The regression formula is the one given above, with x as the parameter.
This makes the accuracy of the whole feature extraction higher and the judgement more stable.
The objective function is the classification-and-regression loss: classification uses cross entropy, and regression uses the regression formula; the whole loss function is the sum of the two terms.
The positions of the feature points in a face picture are shown in Fig. 3.
Step S413: extracting face characteristic data from the positions of the feature points and storing it in the database.
In the face characteristic data extraction, a window of fixed size slides over the last convolutional feature map; each window outputs a feature of fixed dimension and regresses the coordinates and classification of 9 candidate boxes (the classification here indicates whether the box contains an object, not the specific category).
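The 9 candidate boxes per window are reminiscent of anchor-box schemes; the following sketch assumes the common 3-scales-by-3-aspect-ratios breakdown, which the patent does not actually specify:

```python
def candidate_boxes(cx, cy, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """The 9 candidate boxes one sliding window proposes at feature-map
    position (cx, cy): 3 scales x 3 aspect ratios, centred on the window.
    (The 3x3 breakdown is an assumption; the patent only says 9 boxes.)
    Each box keeps area scale**2 while its width/height ratio equals r."""
    boxes = []
    for s in scales:
        for r in ratios:
            w = s * r ** 0.5
            h = s / r ** 0.5
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

boxes = candidate_boxes(50, 50)
print(len(boxes))                                # → 9
```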
The face characteristic data extraction may be realised with the classification and regression functions of TensorFlow deep learning, which is convenient to operate.
Embodiment 7
This embodiment differs from the face collection and recognition method based on deep learning described above in that, as shown in Fig. 6, step S42 includes:
Step S421: scaling each face photo in the collected video to a picture of a fixed size.
In this way each feature position, for example the lips or the eyes, will appear in roughly the same place, which is convenient for feature extraction and recognition.
Step S422: after confirming the positions of the eyes and lips in the scaled picture, confirming the positions of the other feature points by classification and regression.
The regression formula is the one given above; this makes the judgement more stable.
The objective function is the classification-and-regression loss: classification uses cross entropy, and regression uses the regression formula; the whole loss function is the sum of the two terms.
The positions of the feature points in a face picture are shown in Fig. 3.
Step S423: extracting face characteristic data from the positions of the feature points.
In the face characteristic data extraction, a window of fixed size slides over the last convolutional feature map; each window outputs a feature of fixed dimension and regresses the coordinates and classification of 9 candidate boxes (the classification here indicates whether the box contains an object, not the specific category).
The face characteristic data extraction may be realised with the classification and regression functions of TensorFlow deep learning, which is convenient to operate.
Embodiment 8
This embodiment differs from the face collection and recognition method based on deep learning described above in that, in step S43, because of the influence of light, environment, shooting angle and so on, a direct 1:N comparison of the face characteristic data against the database may yield a low comparison success rate.
To solve this problem, as shown in Fig. 7, step S43 includes:
Step S431: cutting the face features into multiple blocks, comparing features block by block, and confirming the comparison similarity of each block.
Specifically, the face features are cut into multiple blocks, the feature points in each block are compared with the corresponding features of every face in the database, and the similarity of each feature point is confirmed.
The face features define multiple feature points; as shown in Fig. 3, there are 67 feature points, and the cut blocks must cover every feature point of the face. This prevents feature points from being missed, which would reduce the recognition accuracy.
In the block-by-block comparison, the feature points in each block are compared with the corresponding features of every face in the database, so the comparison is well targeted and fast.
Step S432: determining the blocks of higher similarity according to the comparison similarity.
Specifically, the blocks of higher similarity are picked out according to the comparison similarities confirmed above.
In this way the comparison can still proceed when part of the face is covered, reducing the influence of changes in clothing, expression and environment.
The determined higher-similarity blocks comprise at least 50% of the feature points (i.e. 34 blocks). The similarity threshold is determined automatically by the system according to the circumstances; alternatively, the blocks may be sorted by comparison similarity and the top 34 taken as the higher-similarity blocks (the number may also be, for example, the top 66, and can be confirmed according to the actual situation).
Step S433: aggregating the comparison similarities of the higher-similarity blocks and calculating the final comparison percentage; if the comparison percentage exceeds the set threshold, the faces are regarded as matching.
The calculation averages the comparison similarities of the blocks, which is convenient to compute and accurate to judge.
The set threshold is 70%, which gives higher accuracy; it may also be determined according to the actual situation.
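Steps S431 to S433 can be condensed into one small function. The block similarities are given as plain numbers here; computing them from real features is outside this sketch:

```python
def faces_match(block_sims, keep=34, threshold=0.70):
    """Step S43 in miniature: keep the 34 most similar of the 67 feature
    blocks, average their similarities, and accept the match if the
    average exceeds the 70% threshold."""
    best = sorted(block_sims, reverse=True)[:keep]
    score = sum(best) / len(best)
    return score > threshold, score

# 67 block similarities: 40 strong matches, 27 weakened by occlusion.
sims = [0.9] * 40 + [0.2] * 27
ok, score = faces_match(sims)
print(ok, round(score, 2))                       # → True 0.9
```

Averaging only the best blocks is what lets a partially covered face still pass, as the embodiment explains.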
Embodiment 9
As shown in Fig. 8, which is a structural diagram of the face collection and recognition device based on deep learning of the present invention, the device includes:
a face processing unit 1, which collects a large number of pictures containing faces and processes them into face files;
a model establishing unit 2, which performs deep learning on the face files to establish a learning model;
a face collecting unit 3, which compares the input video stream against the learning model and collects the faces in the video;
wherein the input video stream may be collected by a video camera or camera, or may be video obtained in other ways;
a face recognition unit 4, which, for each collected face, extracts face characteristic data and compares it with the face characteristic data in the database to confirm whether they match.
In this way, faces can be rapidly extracted from the video stream and compared, accelerating face recognition; a face can be recognised within 0.1 seconds, which greatly improves recognition speed relative to common face extraction; and through deep learning the system can keep learning continuously, steadily raising its recognition precision far above that of common recognition techniques.
Embodiment 10
This embodiment differs from the face collection and recognition device based on deep learning described above in that, in the face processing unit 1:
To improve the recognition accuracy, no fewer than 10,000 pictures containing faces should be collected; this allows sufficient training, which in turn greatly improves the recognition accuracy.
In this unit, each picture containing a face is read, a rectangular frame is drawn on the picture to cut out the face, the face position information is saved, and the saved information is associated with the picture.
The face position information may be saved in an XML document, which is then associated with the picture.
On this basis, face categories may be added, such as child faces and adult faces, to increase the recognition accuracy; in that case, the category information is entered after the rectangular frame has been drawn to cut out the face.
Since this unit has a large amount of content to process, special software can be written to handle it, saving time and increasing processing speed.
The face file comprises at least sample files and processing information; the sample files are the face pictures, and the processing information is the face picture information corresponding to each picture.
The processing information comprises at least the name of the corresponding picture and the position of the face region, which allows fast reading; it may also include the size of the picture, the face category and so on, which is convenient for later reading.
The processing information may be recorded in an XML file, which is convenient to save, modify and read.
The face file also includes training files: configuration files for the training and test picture sets, so that the training program reads specific images for testing and training respectively.
Embodiment 11
In the face capturing and recognition device based on deep learning described above, the present embodiment differs in the model establishing unit 2:
A learning algorithm is written that reads the face file made above, performs deep learning, and establishes the learning model.
The learning algorithm is written as a program using a TensorFlow multilayer convolutional neural network.
A convolutional neural network (CNN) is a feedforward neural network whose artificial neurons respond to surrounding cells within a limited coverage area, and which performs outstandingly for large-scale image processing. Convolutional neural networks are a recently developed and widely noted efficient recognition method. In general, the basic structure of a CNN includes two layers. The first is the feature extraction layer: the input of each neuron is connected to a local receptive field of the preceding layer, and the local feature is extracted; once the local feature is extracted, its positional relationship with other features is also determined. The second is the feature mapping layer: each computational layer of the network is composed of multiple feature maps, each feature map is a plane, and all neurons in the plane share equal weights. The feature mapping structure uses the sigmoid function, whose influence-function kernel is small, as the activation function of the convolutional network, so that the feature map has shift invariance. Moreover, since the neurons on one mapping plane share weights, the number of free parameters of the network is reduced. Each convolutional layer in a convolutional neural network is followed by a computational layer for local averaging and second extraction; this distinctive structure of two feature extractions reduces the feature resolution.
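The two-layer building block just described (local receptive fields in the feature extraction layer; shared weights and a sigmoid activation producing a feature map) can be sketched in plain NumPy; the kernel size and values are illustrative assumptions, not part of the embodiment.

```python
# Sketch of one CNN feature map: a single shared kernel slides over the
# image, so every output neuron uses the same weights (weight sharing)
# and is connected only to a local receptive field of the input.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feature_map(image, kernel, bias=0.0):
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + kh, c:c + kw]        # local receptive field
            out[r, c] = sigmoid(np.sum(patch * kernel) + bias)
    return out
```

Because the same kernel is reused at every position, a shifted input produces a correspondingly shifted feature map, which is the shift invariance mentioned above.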
The deep learning can use the TensorFlow library (TensorFlow is Google's second-generation artificial intelligence learning system, developed on the basis of DistBelief; its name derives from its own operating principle), on which the multilayer convolutional neural network is established. Since TensorFlow is code published openly by Google, the detailed process of the deep learning is not expanded upon here.
When the program is written using a TensorFlow multilayer convolutional neural network, the categories can be respectively defined as child faces and adult faces; through this learning mode, children and adults can be correctly distinguished, improving the recognition accuracy.
Embodiment 12
In the face capturing and recognition device based on deep learning described above, the present embodiment differs in the face collecting unit 3:
The input video stream is saved in picture format and compared with the generated learning model; if the comparison success rate between a certain region of a picture and the model exceeds a threshold, that region is considered to contain a face, and the face part of the photo is saved.
The threshold is 80%, which gives a higher comparison accuracy rate; the threshold can also be set according to actual conditions.
The comparison calls the TensorFlow library to compare against the learning model.
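A minimal sketch of this collection step follows, under the assumption of a grayscale frame and a caller-supplied `score_region` function standing in for the TensorFlow model comparison; the window and stride sizes are likewise assumptions.

```python
# Sketch: scan candidate regions of a video frame, score each region
# against the learning model, and keep those whose comparison success
# rate exceeds the 80% threshold of the embodiment.
THRESHOLD = 0.80  # the embodiment's default; adjustable to actual conditions

def collect_faces(frame, score_region, win=64, stride=32):
    """frame: 2-D grayscale array (list of row lists). Returns the
    (row, col, win) regions judged to contain a face."""
    rows, cols = len(frame), len(frame[0])
    hits = []
    for r in range(0, rows - win + 1, stride):
        for c in range(0, cols - win + 1, stride):
            region = [row[c:c + win] for row in frame[r:r + win]]
            if score_region(region) > THRESHOLD:   # comparison success rate
                hits.append((r, c, win))           # face part kept for saving
    return hits
```

In the device itself, `score_region` would be the TensorFlow comparison against the learning model, and the hit regions would be cropped and saved.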
Embodiment 13
In the face capturing and recognition device based on deep learning described above, the present embodiment differs in that, as shown in Figure 9, the face identification unit 4 includes:
a personnel feature extraction subunit 41, which gathers the face photos of relevant personnel, extracts the face characteristic data, and saves the face characteristic data in the database;
a face feature extraction subunit 42, which extracts the face characteristic data from the faces in the collected video;
a feature comparison subunit 43, which performs a 1:N comparison between the face characteristic data from the video and the face characteristic data in the database, and confirms whether they are consistent.
In this way, the recognition precision is improved.
Embodiment 14
In the face capturing and recognition device based on deep learning described above, the present embodiment differs in that, as shown in Figure 10, the personnel feature extraction subunit 41 includes:
a personnel acquisition module 411, which gathers the face photos of relevant personnel, scales each face photo according to a fixed size, and compresses it into a picture of fixed size;
In this way each feature position, for example the lips or the eyes, will appear at substantially the same position, which facilitates feature extraction and recognition.
a first position confirmation module 412, which confirms the positions of the eyes and lips in the scaled picture and then confirms the positions of the other feature points by the principle of classification and regression;
Wherein, the regression formula is: Smooth_L1(x) = 0.5x² when |x| < 1, and |x| − 0.5 otherwise, where x is the parameter.
In this way the judgment is more stable.
The objective function is the classification and regression loss: classification uses cross entropy, and regression uses the above regression formula.
The whole loss function is specifically the combination of the classification (cross-entropy) loss and the regression (Smooth L1) loss.
The positions of the feature points in a face picture are as shown in Figure 3.
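The Smooth L1 regression term above can be sketched directly from its definition (0.5x² when |x| < 1, |x| − 0.5 otherwise):

```python
# Sketch of the Smooth L1 regression loss used in the objective above;
# x is the regression parameter (e.g. the difference between a predicted
# and a target coordinate).
def smooth_l1(x):
    x = abs(x)
    return 0.5 * x * x if x < 1 else x - 0.5
```

The quadratic part keeps gradients small near zero while the linear part limits the influence of outliers, which is why the judgment is more stable.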
a first feature extraction module 413, which extracts the face characteristic data according to the positions of the feature points and stores them in the database.
In the face characteristic data extraction, a fixed-size window slides over the last convolutional feature map; each window outputs a feature of fixed dimension, and for each window 9 candidate boxes are regressed to coordinates and a classification (the classification here indicates whether the box contains an object, not its specific category).
The face characteristic data extraction can be realized using the classification and regression functions of TensorFlow deep learning, which is convenient to operate.
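The 9-boxes-per-window behaviour described above resembles a region-proposal scheme; the sketch below enumerates the candidate boxes for each sliding-window position on the feature map, with the anchor scales and aspect ratios being assumptions (3 scales x 3 ratios = 9 boxes), not values stated in the embodiment.

```python
# Sketch: for every sliding-window position on the last convolutional
# feature map, emit the 9 candidate boxes (cx, cy, w, h) whose coordinates
# and object/not-object score the network would then regress and classify.
from itertools import product

SCALES = (8, 16, 32)        # assumed anchor scales, in feature-map units
RATIOS = (0.5, 1.0, 2.0)    # assumed aspect ratios -> 3 x 3 = 9 boxes

def window_anchors(fmap_h, fmap_w, stride=1):
    anchors = []
    for cy, cx in product(range(0, fmap_h, stride), range(0, fmap_w, stride)):
        for s, r in product(SCALES, RATIOS):
            w, h = s * (r ** 0.5), s / (r ** 0.5)   # equal-area boxes per scale
            anchors.append((cx, cy, w, h))
    return anchors
```

Each window thus contributes a fixed-dimension feature plus 9 regressed boxes, matching the description above.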
Embodiment 15
In the face capturing and recognition device based on deep learning described above, the present embodiment differs in that, as shown in Figure 11, the face feature extraction subunit 42 includes:
a face data acquisition module 421, which scales each face photo in the collected video according to a fixed size and compresses it into a picture of fixed size;
In this way each feature position, for example the lips or the eyes, will appear at substantially the same position, which facilitates feature extraction and recognition.
a second position confirmation module 422, which confirms the positions of the eyes and lips in the scaled picture and then confirms the positions of the other feature points by the principle of classification and regression;
Wherein, the regression formula is: Smooth_L1(x) = 0.5x² when |x| < 1, and |x| − 0.5 otherwise, where x is the parameter.
In this way the judgment is more stable.
The objective function is the classification and regression loss: classification uses cross entropy, and regression uses the above regression formula.
The whole loss function is specifically the combination of the classification (cross-entropy) loss and the regression (Smooth L1) loss.
The positions of the feature points in a face picture are as shown in Figure 3.
a second feature extraction module 423, which extracts the face characteristic data according to the positions of the feature points.
In the face characteristic data extraction, a fixed-size window slides over the last convolutional feature map; each window outputs a feature of fixed dimension, and for each window 9 candidate boxes are regressed to coordinates and a classification (the classification here indicates whether the box contains an object, not its specific category).
The face characteristic data extraction can be realized using the classification and regression functions of TensorFlow deep learning, which is convenient to operate.
Embodiment 16
In the face capturing and recognition device based on deep learning described above, the present embodiment differs in the feature comparison subunit 43: owing to the influence of light, environment, shooting angle and the like, when the face characteristic data are directly compared 1:N against the database, a low comparison success rate may occur.
To solve this problem, as shown in Figure 12, the feature comparison subunit 43 includes:
a square cutting module 431, which cuts the face features into multiple squares and, by performing feature comparison between the squares, confirms the comparison similarity of each square;
Specifically, this module cuts the face features into multiple squares and compares the feature points in each square with the corresponding features of each face in the database, confirming the similarity of each feature point.
The face features define multiple feature points; as shown in Figure 3, there are 67 feature points, and the cut squares must include every feature point of the face. In this way missing feature points, which would reduce the recognition accuracy, are prevented.
The feature comparison between the squares compares the feature points in each square with the corresponding features of each face in the database; the comparison is therefore well targeted and fast.
a square determining module 432, which determines the squares of higher similarity according to the comparison similarity;
Specifically, according to the comparison similarities of the squares confirmed above, the squares with higher similarity are found.
In this way the comparison can proceed smoothly even when part of the face is covered, reducing the influence brought by changes of clothing, expression and environment.
The determined higher-similarity squares comprise at least 50% of the feature points (i.e. 34 squares); the similarity threshold is automatically determined by the system according to circumstances, or the squares can be sorted by comparison similarity and the first 34 confirmed as the higher-similarity squares (the first 66 is also possible; the specific number can be confirmed according to actual conditions).
a percentage calculation module 433, which comprehensively counts the comparison similarities of the higher-similarity squares and calculates the final comparison percentage; if the comparison percentage exceeds a set threshold, the faces are deemed consistent.
Wherein, the calculation method takes the average of the comparison similarities of the squares, which makes calculation convenient and judgment accurate.
The set threshold is 70%, which gives higher accuracy; it can also be determined according to actual conditions.
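The square-comparison flow of this embodiment can be sketched as follows, with the per-square similarities assumed to be supplied by an upstream comparison step (the function name and signature are assumptions for illustration):

```python
# Sketch: keep the higher-similarity squares (at least 34 of the
# 67-landmark squares), average their similarities, and match the
# result against the 70% threshold of the embodiment.
def compare_faces(square_sims, keep=34, threshold=0.70):
    """square_sims: per-square comparison similarities (0..1) between the
    video face and one database face. Returns (percentage, matched)."""
    best = sorted(square_sims, reverse=True)[:keep]   # higher-similarity squares
    pct = sum(best) / len(best)                       # average of kept squares
    return pct, pct > threshold
```

Because only the most similar squares are kept, a partly covered face can still produce a comparison percentage above the threshold, which is the robustness described above.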
The foregoing are merely preferred embodiments of the present invention, given for the purpose of illustration and not limitation. Those skilled in the art will understand that many changes, modifications and equivalents can be made within the spirit and scope defined by the claims of the present invention, all of which fall within the protection scope of the present invention.

Claims (10)

  1. A face collection and recognition method based on deep learning, characterized by comprising:
    Step S1: collecting a large number of pictures containing faces and processing them into a face file;
    Step S2: performing deep learning on the face file and establishing a learning model;
    Step S3: comparing an input video stream with the learning model and collecting the faces in the video;
    Step S4: extracting face characteristic data from the collected faces, comparing the face characteristic data with the face characteristic data in a database, and confirming whether they are consistent.
  2. The face collection and recognition method as claimed in claim 1, characterized in that the face file includes the face pictures, the names of the pictures and the position information of the face regions.
  3. The face collection and recognition method as claimed in claim 1, characterized in that step S1 includes:
    Step S11: collecting a large number of pictures containing faces;
    Step S12: performing face matting, recording the face position information of each picture, and processing them into the face file.
  4. The face collection and recognition method as claimed in any one of claims 1 to 3, characterized in that step S4 includes:
    Step S41: gathering the face photos of relevant personnel, extracting the face characteristic data, and saving the face characteristic data in the database;
    Step S42: extracting the face characteristic data from the faces in the collected video;
    Step S43: performing a 1:N comparison between the face characteristic data from the video and the face characteristic data in the database, and confirming whether they are consistent.
  5. The face collection and recognition method as claimed in claim 4, characterized in that step S41 includes:
    Step S411: gathering the face photos of relevant personnel, scaling each face photo according to a fixed size, and compressing it into a picture of fixed size;
    Step S412: confirming the positions of the eyes and lips in the scaled picture, then confirming the positions of the other feature points by classification and regression;
    Step S413: extracting the face characteristic data according to the positions of the feature points and storing them in the database.
  6. The face collection and recognition method as claimed in claim 4, characterized in that step S42 includes:
    Step S421: scaling each face photo in the collected video according to a fixed size and compressing it into a picture of fixed size;
    Step S422: confirming the positions of the eyes and lips in the scaled picture, then confirming the positions of the other feature points by classification and regression;
    Step S423: extracting the face characteristic data according to the positions of the feature points.
  7. The face collection and recognition method as claimed in claim 4, characterized in that step S43 includes:
    Step S431: cutting the face features into multiple squares, performing feature comparison between the squares, and confirming the comparison similarity of each square;
    Step S432: determining the squares of higher similarity according to the comparison similarity;
    Step S433: comprehensively counting the comparison similarities of the higher-similarity squares and calculating a final comparison percentage; if the comparison percentage exceeds a set threshold, deeming the faces consistent.
  8. The face collection and recognition method as claimed in claim 4, characterized in that the regression formula is:
    Smooth_L1(x) = 0.5x², if |x| < 1; |x| − 0.5, otherwise
    wherein x is the parameter.
  9. A face collection and recognition device based on deep learning corresponding to the face collection and recognition method of any one of claims 1 to 8, characterized by comprising:
    a face processing unit, which collects a large number of pictures containing faces and processes them into the face file;
    a model establishing unit, which performs deep learning on the face file and establishes the learning model;
    a face collecting unit, which compares the input video stream with the learning model and collects the faces in the video;
    a face identification unit, which extracts the face characteristic data from the collected faces, compares the face characteristic data with the face characteristic data in the database, and confirms whether they are consistent.
  10. The face capturing and recognition device as claimed in claim 9, characterized in that the face identification unit includes:
    a personnel feature extraction subunit, which gathers the face photos of relevant personnel, extracts the face characteristic data, and saves the face characteristic data in the database;
    a face feature extraction subunit, which extracts the face characteristic data from the faces in the collected video;
    a feature comparison subunit, which performs a 1:N comparison between the face characteristic data from the video and the face characteristic data in the database, and confirms whether they are consistent.
CN201710705219.5A 2017-08-17 2017-08-17 Face acquisition and recognition method and device based on deep learning Active CN107392182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710705219.5A CN107392182B (en) 2017-08-17 2017-08-17 Face acquisition and recognition method and device based on deep learning


Publications (2)

Publication Number Publication Date
CN107392182A true CN107392182A (en) 2017-11-24
CN107392182B CN107392182B (en) 2020-12-04

Family

ID=60353143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710705219.5A Active CN107392182B (en) 2017-08-17 2017-08-17 Face acquisition and recognition method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN107392182B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104573679A (en) * 2015-02-08 2015-04-29 天津艾思科尔科技有限公司 Deep learning-based face recognition system in monitoring scene
CN104636730A (en) * 2015-02-10 2015-05-20 北京信息科技大学 Method and device for face verification
CN105354543A (en) * 2015-10-29 2016-02-24 小米科技有限责任公司 Video processing method and apparatus
CN105447441A (en) * 2015-03-19 2016-03-30 北京天诚盛业科技有限公司 Face authentication method and device
CN105808709A (en) * 2016-03-04 2016-07-27 北京智慧眼科技股份有限公司 Quick retrieval method and device of face recognition
US20160275341A1 (en) * 2015-03-18 2016-09-22 Adobe Systems Incorporated Facial Expression Capture for Character Animation
CN106295501A (en) * 2016-07-22 2017-01-04 中国科学院自动化研究所 The degree of depth based on lip movement study personal identification method
CN106951846A (en) * 2017-03-09 2017-07-14 广东中安金狮科创有限公司 A kind of face 3D models typing and recognition methods and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
野孩子1991: "Understanding the faster rcnn source code (1): understanding the SmoothL1LossLayer paper in combination with the code", published online at HTTPS://BLOG.CSDN.NET/U010668907/ARTICLE/DETAILS/51456928 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108388877A (en) * 2018-03-14 2018-08-10 广州影子控股股份有限公司 The recognition methods of one boar face
CN108520184A (en) * 2018-04-16 2018-09-11 成都博锐智晟科技有限公司 A kind of method and system of secret protection
CN109190442A (en) * 2018-06-26 2019-01-11 杭州雄迈集成电路技术有限公司 A kind of fast face detecting method based on depth cascade convolutional neural networks
CN109190442B (en) * 2018-06-26 2021-07-06 杭州雄迈集成电路技术股份有限公司 Rapid face detection method based on deep cascade convolution neural network
CN110895663A (en) * 2018-09-12 2020-03-20 杭州海康威视数字技术股份有限公司 Two-wheel vehicle identification method and device, electronic equipment and monitoring system
JP2021504214A (en) * 2018-10-19 2021-02-15 シャンハイ センスタイム インテリジェント テクノロジー カンパニー リミテッド Driving environment intelligent adjustment, driver registration method and equipment, vehicles and devices
CN109887234A (en) * 2019-03-07 2019-06-14 百度在线网络技术(北京)有限公司 A kind of children loss prevention method, apparatus, electronic equipment and storage medium
CN111860047A (en) * 2019-04-26 2020-10-30 美澳视界(厦门)智能科技有限公司 Face rapid identification method based on deep learning
CN111860047B (en) * 2019-04-26 2024-06-11 美澳视界(厦门)智能科技有限公司 Face rapid recognition method based on deep learning
CN110807859A (en) * 2019-10-31 2020-02-18 上海工程技术大学 Subway pedestrian flow dynamic monitoring and high-precision gate identification system
CN111079720A (en) * 2020-01-20 2020-04-28 杭州英歌智达科技有限公司 Face recognition method based on cluster analysis and autonomous relearning
CN111428683A (en) * 2020-04-13 2020-07-17 北京计算机技术及应用研究所 Web front-end image synthesis method based on tensiorflow

Also Published As

Publication number Publication date
CN107392182B (en) 2020-12-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant