CN106204779B - Check class attendance method based on plurality of human faces data collection strategy and deep learning - Google Patents
- Publication number: CN106204779B
- Application number: CN201610504632A
- Authority
- CN
- China
- Prior art keywords
- face
- formula
- layer
- convolution
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C1/00—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
- G07C1/10—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention discloses a class attendance method based on a multi-face data acquisition strategy and deep learning, which solves the technical problem that existing attendance methods based on face recognition have a poor recognition rate. The technical solution performs multi-target face detection and extraction using the AdaBoost algorithm combined with a skin-color model. Only one video of all faces participating in attendance needs to be recorded; the faces in the video sequence are detected and extracted, completing the establishment of the face database. The face recognition method based on deep learning takes the deep convolutional neural network LeNet-5 as its basis: a simplified LeNet-5 model learns the face features under different scenes in the face database and obtains new feature representations through multi-layer non-linear transformations. These new features eliminate, as far as possible, intra-class variation such as illumination, noise, pose and expression, while retaining the inter-class variation produced by identity differences, improving the recognition rate of the face recognition method in real, complex scenes.
Description
Technical field
The present invention relates to attendance methods based on face recognition, and more particularly to a class attendance method based on a multi-face data acquisition strategy and deep learning.
Background technology
The document "A Prototype of Automated Attendance System Using Image Processing, International Journal of Advanced Research in Computer and Communication Engineering, Vol. 5, Issue 4, April 2016, p. 501-505" discloses an attendance method based on face recognition. The method identifies detected faces using traditional Principal Component Analysis (PCA). After a person enters the attendance system, the system judges whether their face data already exists in the database; if it does, the person is identified directly and the detection result is added to the database. If it does not, face data must first be collected for that person. This method therefore requires collecting face data person by person before recognition can take place. In practice, however, the number of attendees is large; collecting face data one person at a time takes considerable time, so data acquisition efficiency is low. Moreover, the method requires each attendee to complete the acquisition independently, which makes the quality of the collected face data hard to guarantee. In addition, the method performs recognition against a simple background with stable illumination and a single facial expression, whereas in real attendance settings there are many attendees and the variations in background, illumination, pose and expression are extremely complex; the traditional PCA-based face recognition method has a poor recognition rate in such complex real scenes.
Summary of the invention
To overcome the poor recognition rate of existing attendance methods based on face recognition, the present invention provides a class attendance method based on a multi-face data acquisition strategy and deep learning. The method performs multi-target face detection and extraction using the AdaBoost algorithm and a skin-color model. Only one video of all faces participating in attendance needs to be recorded; the faces in the video sequence are detected and extracted, completing the establishment of the face database. This solves the problem that face data acquisition in real attendance is time-consuming, labour-intensive and hard to standardise, and makes it easier to obtain massive amounts of face data. In addition, the face recognition method based on deep learning takes the deep convolutional neural network LeNet-5 as its basis: a simplified LeNet-5 model learns the face features under different scenes in the face database, and new feature representations are obtained through multi-layer non-linear transformations. These new features eliminate, as far as possible, intra-class variation such as illumination, noise, pose and expression, retain the inter-class variation produced by identity differences, and improve the recognition rate of the face recognition method on faces in real, complex scenes.
The technical solution adopted by the present invention to solve the technical problem is a class attendance method based on a multi-face data acquisition strategy and deep learning, characterised by the following steps:
(a) Face data acquisition.
A 30-second video sequence of each collected person's face is recorded. During recording, the collector works from a set of rules that simulate the variations a face may undergo during real attendance: expression changes such as smiling, frowning and opening the mouth, and action changes such as raising the head, lowering the head and changing the face orientation. The collected person performs these expression and action changes during recording as instructed by the collector.
(b) Multi-face detection using the AdaBoost algorithm combined with a skin-color model.
The AdaBoost algorithm is combined with a skin-color model: AdaBoost locates candidate face positions, and the skin-color model then verifies them. The method is as follows:
① The AdaBoost algorithm is used to generate a classifier for face detection, and a preliminary face detection is carried out.
② The skin-color model verifies the face regions preliminarily determined by AdaBoost by comparing the pixels in the image against a standard skin color, distinguishing skin regions from non-skin regions. Three color spaces are used when setting the standard skin-color ranges: RGB, HSV and YCbCr.
Two RGB standard skin-color models are set. Threshold range of model one: G > 40, B > 20, R > G, R > B, MAX(R,G,B) − MIN(R,G,B) > 15. Threshold range of model two: R > 220, |R − G| < 15, R > G, R > B.
RGB is converted to HSV using formulas (1)-(4), and the HSV standard skin-color threshold range is set to 0 < H < 50, 0.23 < S < 0.68.
In the formulas, H is the hue, R, G and B are the values of the red, green and blue channels, S is the saturation, and V is the lightness.
RGB is converted to YCbCr using formula (5), and the YCbCr standard skin-color threshold range is set to Y > 20, 135 < Cr < 180, 85 < Cb < 135.
In the formula, Y is the luminance component, Cb is the blue-chrominance component, and Cr is the red-chrominance component.
(c) Constraining the range in which face centers may appear.
① 20 frames are extracted from the acquired video sequence; the inter-frame interval g is calculated using formula (6), where t is the length of the recorded video and f is its frame rate (frames per second).
② After the 20 extracted frames have been processed by the AdaBoost algorithm and the skin-color model, the detected face coordinates are saved and the average of the face center coordinates is calculated using formulas (7) and (8), where (x_r, y_r) is the bottom-right coordinate of a detected face and (x_l, y_l) is its top-left coordinate.
The calculated average is compared with the face center coordinates in real images to obtain the error range of the face center coordinates, and constraints are added according to this error range:
x_c − m ≤ x_c_real ≤ x_c + m (9)
y_c − n ≤ y_c_real ≤ y_c + n (10)
where (x_c_real, y_c_real) is the actually detected face center position.
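A hedged sketch of step (c): formulas (6)-(8) are images in the source, so the natural readings are assumed here, namely g = t·f / 20 (sample 20 frames evenly from a t-second clip at f fps) and the face center as the midpoint of the detected box, averaged over the frames. The margins m and n are illustrative parameters.

```python
def frame_interval(t_seconds, fps, n_frames=20):
    """Assumed reading of formula (6): interval g between sampled frames."""
    return int(t_seconds * fps / n_frames)

def face_center(x_l, y_l, x_r, y_r):
    """Assumed reading of formulas (7)-(8): midpoint of the detected box."""
    return ((x_l + x_r) / 2.0, (y_l + y_r) / 2.0)

def within_constraint(center_real, center_avg, m, n):
    """Constraints (9)-(10): the detected center must lie within
    +/- m horizontally and +/- n vertically of the average center."""
    xc_real, yc_real = center_real
    xc, yc = center_avg
    return (xc - m <= xc_real <= xc + m) and (yc - n <= yc_real <= yc + n)
```

For a 30-second clip at 20 fps, `frame_interval(30, 20)` gives an interval of 30 frames between the 20 sampled images.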
(d) Extracting and processing the detected faces to complete the face database.
Each detected face is extracted and converted into a grayscale face image of 28 × 28 pixels. The processed face images are stored according to their constraint conditions, completing the establishment of the practical attendance face database.
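The conversion in step (d) can be sketched with plain NumPy. The patent does not name a grayscale formula or an interpolation method, so the BT.601 luminance weights and nearest-neighbour resizing are assumptions here.

```python
import numpy as np

def to_gray28(rgb_crop):
    """Convert a detected face crop (H x W x 3, uint8) into the
    28 x 28 grayscale image stored in the face database.
    Luminance weights and nearest-neighbour resize are assumed."""
    gray = (0.299 * rgb_crop[..., 0] + 0.587 * rgb_crop[..., 1]
            + 0.114 * rgb_crop[..., 2])
    h, w = gray.shape
    rows = np.arange(28) * h // 28   # nearest-neighbour row indices
    cols = np.arange(28) * w // 28   # nearest-neighbour column indices
    return gray[rows][:, cols].astype(np.uint8)
```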
(e) Training the model.
The 28 × 28-pixel grayscale face images in the face database are fed as training data into the deep convolutional neural network model, which is trained over multiple iterations to complete the training. The training process is divided into two steps: forward propagation and back-propagation.
① The purpose of forward propagation is to feed the training data through the network to obtain its activations. It comprises a convolutional layer and a down-sampling layer.
The convolutional layer is processed first: the convolution features extracted by convolutional layer l are obtained using formula (11). In the formula, M_j is the selected set of input feature maps, the kernel and bias are those of convolutional layer l, and the convolution feature of layer l is obtained through the activation function f.
After the convolution features of the convolutional layer are obtained, formula (12) applies down-sampling to them, i.e. aggregate statistics are computed over the features at different positions. In the formula, down(·) is the down-sampling function, β is a multiplicative bias, and b is an additive bias.
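The forward pass of formulas (11)-(12) can be sketched for a single feature map. The formula images are not reproduced in the source, so this follows the standard LeNet-style reading: a valid 2-D convolution plus bias through an activation f, then mean-pooling scaled by β and shifted by b. The sigmoid activation and 2 × 2 mean pooling are assumptions, not stated in the text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_forward(x, k, b):
    """Formula (11), assumed form: valid convolution of map x with
    kernel k, plus bias b, through the activation f (sigmoid assumed)."""
    H, W = x.shape
    n = k.shape[0]
    kr = k[::-1, ::-1]                     # true convolution flips the kernel
    out = np.empty((H - n + 1, W - n + 1))
    for u in range(out.shape[0]):
        for v in range(out.shape[1]):
            out[u, v] = np.sum(x[u:u + n, v:v + n] * kr)
    return sigmoid(out + b)

def downsample_forward(x, beta, b, n=2):
    """Formula (12), assumed form: n x n mean pooling (down(.)),
    scaled by the multiplicative bias beta plus the additive bias b."""
    H, W = x.shape
    trimmed = x[:H - H % n, :W - W % n]
    pooled = trimmed.reshape(trimmed.shape[0] // n, n,
                             trimmed.shape[1] // n, n).mean(axis=(1, 3))
    return sigmoid(beta * pooled + b)
```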
② Back-propagation adjusts the weights and biases by minimising the residual. It comprises a convolutional layer and a down-sampling layer.
For a convolutional layer whose next layer is a down-sampling layer, the residual is calculated using formula (13). In the formula, up(·) is an up-sampling function, and ∘ denotes element-wise multiplication of matrices. Here,
u^l = W^l x^(l−1) + b^l (14)
x^l = f(u^l) (15)
where f is the activation function.
From the residual thus obtained, the gradient of the bias b is calculated using formula (16), where u and v denote coordinates in the feature map.
Defining the n × n pixel block that is element-wise multiplied during convolution, the kernel gradient is calculated using formula (17): the value at position (u, v) of the output convolution feature map is the element-wise product of the n × n pixel block at position (u, v) in the previous layer and the kernel k_ij.
For a down-sampling layer whose next layer is a convolutional layer, the residual of the down-sampled feature map is calculated using formula (18). In the formula, the convolution kernel is rotated by 180°, i.e. its elements are swapped across the diagonal; conv2 is the full-convolution function, and 'full' means the vacant positions of the full convolution are padded with 0.
After the residual is obtained, formula (19) calculates the gradient of the bias b, where u and v denote coordinates in the feature map. The kernel gradient is then calculated from the residual using formula (20).
③ The output layer classifies the features. It is essentially a classifier, solving the multi-class problem with a softmax classifier.
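The output layer of step ③ can be sketched as a standalone softmax classifier on flattened features, trained by gradient descent on the cross-entropy loss. This is a minimal sketch under assumptions: the convolutional feature extraction is treated as already done, and the learning rate and iteration count are illustrative, not taken from the patent.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax(X, y, n_classes, lr=0.1, iters=200):
    """Train a softmax output layer on feature matrix X with integer
    labels y, using plain gradient descent on cross-entropy loss."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]               # one-hot labels
    for _ in range(iters):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(X)   # gradient of cross-entropy
    return W

def predict(X, W):
    return softmax(X @ W).argmax(axis=1)
```

In the patent the softmax gradients would propagate back into the convolutional layers via formulas (13)-(20); here the layer is shown in isolation.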
(f) Testing the trained model with test data.
① Acquisition of test data. During actual class time, images of all collected persons are captured. The faces appearing in each image are detected and processed to obtain grayscale face images of 28 × 28 pixels, which are saved as test data.
② The test data are fed into the trained model, which identifies them and outputs the corresponding face labels from the training set, completing face recognition.
(g) Multiple experiments are carried out by adjusting the number of layers of the convolutional neural network, the number of convolution kernels per layer, and the learning rate, i.e. steps (e) and (f) are repeated. The experimental results are compared, the model parameters with the highest recognition rate are selected, and the parameters and the trained model are saved.
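Step (g) amounts to a grid search over hyper-parameters. A generic sketch follows; `train_fn` and `eval_fn` stand in for the patent's training step (e) and testing step (f), and the grid keys are illustrative.

```python
from itertools import product

def grid_search(train_fn, eval_fn, grid):
    """Repeat training and testing over every combination in `grid`
    (a dict of parameter name -> list of values) and keep the
    configuration with the highest recognition rate."""
    best_acc, best_cfg, best_model = -1.0, None, None
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        model = train_fn(cfg)              # step (e): train with cfg
        acc = eval_fn(model)               # step (f): recognition rate
        if acc > best_acc:
            best_acc, best_cfg, best_model = acc, cfg, model
    return best_cfg, best_model, best_acc
```

The returned configuration and model would then be saved, as the patent requires.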
The beneficial effects of the invention are as follows. The method performs multi-target face detection and extraction using the AdaBoost algorithm and a skin-color model. Only one video of all faces participating in attendance needs to be recorded; the faces in the video sequence are detected and extracted, completing the establishment of the face database. This solves the problem that face data acquisition in real attendance is time-consuming, labour-intensive and hard to standardise, and makes it easier to obtain massive amounts of face data. In addition, the face recognition method based on deep learning takes the deep convolutional neural network LeNet-5 as its basis; a simplified LeNet-5 model learns the face features under different scenes in the face database, and new feature representations are obtained through multi-layer non-linear transformations. These new features eliminate, as far as possible, intra-class variation such as illumination, noise, pose and expression, retain the inter-class variation produced by identity differences, and improve the recognition rate of the face recognition method on faces in real, complex scenes.
The present invention is elaborated below with reference to specific embodiments.
Specific embodiments
The steps of the class attendance method of the present invention based on a multi-face data acquisition strategy and deep learning are as follows:
1. Face data acquisition.
The invention acquires face data by recording video of the persons to be collected, obtaining a video sequence of about 30 seconds.
During recording, the collector works from a set of rules that simulate the variations a face may undergo during real attendance: expression changes such as smiling and frowning, and action changes such as opening the mouth, raising the head, lowering the head and changing the face orientation. The persons to be collected perform these expression and action changes during recording as instructed by the collector.
2. Multi-face detection using the AdaBoost algorithm combined with a skin-color model.
The invention combines the AdaBoost algorithm with a skin-color model: AdaBoost locates candidate face positions, and the skin-color model then verifies them, which substantially reduces the false-detection rate during face detection. The main implementation is as follows:
① The AdaBoost algorithm is used to generate a classifier for face detection, and a preliminary face detection is carried out.
② The skin-color model verifies the face regions preliminarily determined by AdaBoost by comparing the pixels in the image with the "standard skin color", distinguishing skin regions from non-skin regions. Three color spaces are used when setting the "standard skin color" ranges: RGB, HSV and YCbCr.
Two RGB standard skin-color models are set. Threshold range of model one: G > 40, B > 20, R > G, R > B, MAX(R,G,B) − MIN(R,G,B) > 15. Threshold range of model two: R > 220, |R − G| < 15, R > G, R > B.
RGB is converted to HSV using formulas (1)-(4), and the HSV standard skin-color threshold range is set to 0 < H < 50, 0.23 < S < 0.68.
In the formulas, H is the hue, R, G and B are the values of the red, green and blue channels, S is the saturation, and V is the lightness.
RGB is converted to YCbCr using formula (5), and the YCbCr standard skin-color threshold range is set to Y > 20, 135 < Cr < 180, 85 < Cb < 135.
In the formula, Y is the luminance component, Cb is the blue-chrominance component, and Cr is the red-chrominance component.
3. Constraining the range in which face centers appear.
When face data are actually acquired, the collector has already defined the expressions and actions, so the faces in the video vary only slightly, which makes it easy to determine the range in which faces appear.
① 20 frames are extracted from the acquired video sequence; the inter-frame interval g is calculated using formula (6), where t is the length of the recorded video and f is its frame rate.
② After the 20 extracted frames have been processed by the AdaBoost algorithm and the skin-color model, the detected face coordinates are saved and the average of the face center coordinates is calculated using formulas (7) and (8), where (x_r, y_r) is the bottom-right coordinate of a detected face and (x_l, y_l) is its top-left coordinate.
The calculated average is compared with the face center coordinates in real images to obtain the error range of the face center coordinates, and constraints are added according to this error range:
x_c − m ≤ x_c_real ≤ x_c + m (9)
y_c − n ≤ y_c_real ≤ y_c + n (10)
where (x_c_real, y_c_real) is the actually detected face center position.
4. Extracting and processing the detected faces to complete the face database.
Each detected face is extracted and converted into a grayscale image of 28 pixels × 28 pixels. The processed face images are stored according to their constraint conditions, completing the establishment of the practical attendance face database.
5. Training the model.
The 28 × 28-pixel face images in the face database are fed as training data into the deep convolutional neural network model, which is trained over multiple iterations to complete the training. The training process is divided into two main steps: forward propagation and back-propagation.
① The purpose of forward propagation is to feed the training data through the network to obtain its activations. It comprises a convolutional layer and a down-sampling layer.
The convolutional layer is processed first: the convolution features extracted by convolutional layer l are obtained using formula (11). In the formula, M_j is the selected set of input feature maps, the kernel and bias are those of convolutional layer l, and the convolution feature of layer l is obtained through the activation function f.
After the convolution features of the convolutional layer are obtained, formula (12) applies down-sampling to them, i.e. aggregate statistics are computed over the features at different positions. Compared with using all the extracted features, the aggregated statistics have much lower dimensionality while also improving the results and being less prone to over-fitting.
In the formula, down(·) is the down-sampling function, β is a multiplicative bias, and b is an additive bias.
② Back-propagation adjusts the weights and biases by minimising the residual. It comprises a convolutional layer and a down-sampling layer.
For a convolutional layer whose next layer is a down-sampling layer, the residual is calculated using formula (13). In the formula, up(·) is an up-sampling function, and ∘ denotes element-wise multiplication of matrices. Here,
u^l = W^l x^(l−1) + b^l (14)
x^l = f(u^l) (15)
where f is the activation function.
From the residual obtained above, the gradient of the bias b is calculated using formula (16), where u and v denote coordinates in the feature map.
Defining the n × n pixel block that is element-wise multiplied during convolution, the kernel gradient is calculated using formula (17): the value at position (u, v) of the output convolution feature map is the element-wise product of the n × n pixel block at position (u, v) in the previous layer and the kernel k_ij.
For a down-sampling layer whose next layer is a convolutional layer, the residual of the down-sampled feature map is calculated using formula (18). In the formula, the convolution kernel is rotated by 180°, i.e. its elements are swapped across the diagonal; conv2 is the full-convolution function, and 'full' means the vacant positions of the full convolution are padded with 0.
After the residual is obtained, formula (19) calculates the gradient of the bias b, where u and v denote coordinates in the feature map. The kernel gradient is then calculated from the residual using formula (20).
③ The output layer classifies the features. It is essentially a classifier, solving the multi-class problem with a softmax classifier.
6. Testing the trained model with test data.
① Acquisition of test data. During actual class time, images of all students are captured. The faces appearing in each picture are detected and processed to obtain grayscale face images of 28 pixels × 28 pixels, which are saved as test data.
② The test data are fed into the trained model, which identifies them and outputs the corresponding face labels from the training set, completing face recognition.
7. Multiple experiments are carried out by adjusting parameters such as the number of layers of the convolutional neural network, the number of convolution kernels per layer, and the learning rate, i.e. steps 5 and 6 are repeated. The experimental results are compared, the model parameters with the highest recognition rate are selected, and the parameters and the trained model are saved.
In short, the present invention proposes a class attendance method based on a multi-face data acquisition strategy and deep learning. The multi-face data acquisition strategy solves the problem that massive face data are hard to obtain by one-by-one acquisition in real attendance, greatly improving the efficiency of face acquisition. Meanwhile, the deep learning model is trained with face images from a large number of complex real environments; the model learns new features that eliminate, as far as possible, intra-class variation such as illumination, noise, pose and expression, while retaining the inter-class variation produced by identity differences, solving the problem that traditional face recognition methods have a poor recognition rate on faces in real, complex scenes.
Claims (1)
1. A class attendance method based on a multi-face data acquisition strategy and deep learning, characterised by comprising the following steps:
(a) face data acquisition;
a 30-second video sequence of each collected person's face is recorded; during recording, the collector works from a set of rules that simulate the variations a face may undergo during real attendance, including expression changes such as smiling, frowning and opening the mouth, and action changes such as raising the head, lowering the head and changing the face orientation; the collected person performs these expression and action changes during recording as instructed by the collector;
(b) multi-face detection using the AdaBoost algorithm combined with a skin-color model;
the AdaBoost algorithm is combined with the skin-color model: AdaBoost locates candidate face positions and the skin-color model then verifies them, as follows:
① the AdaBoost algorithm is used to generate a classifier for face detection, and a preliminary face detection is carried out;
② the skin-color model verifies the face regions preliminarily determined by AdaBoost by comparing the pixels in the image with a standard skin color, distinguishing skin regions from non-skin regions; three color spaces are used when setting the standard skin-color ranges: RGB, HSV and YCbCr;
two RGB standard skin-color models are set; threshold range of model one: G > 40, B > 20, R > G, R > B, MAX(R,G,B) − MIN(R,G,B) > 15; threshold range of model two: R > 220, |R − G| < 15, R > G, R > B;
RGB is converted to HSV using formulas (1)-(4), and the HSV standard skin-color threshold range is set to 0 < H < 50, 0.23 < S < 0.68;
in the formulas, H is the hue, R, G and B are the values of the red, green and blue channels, S is the saturation, and V is the lightness;
RGB is converted to YCbCr using formula (5), and the YCbCr standard skin-color threshold range is set to Y > 20, 135 < Cr < 180, 85 < Cb < 135;
in the formula, Y is the luminance component, Cb is the blue-chrominance component, and Cr is the red-chrominance component;
(c) constraining the range in which face centers may appear;
① 20 frames are extracted from the acquired video sequence, and the inter-frame interval g is calculated using formula (6), where t is the length of the recorded video and f is its frame rate;
② after the 20 extracted frames have been processed by the AdaBoost algorithm and the skin-color model, the detected face coordinates are saved and the average of the face center coordinates is calculated using formulas (7) and (8), where (x_r, y_r) is the bottom-right coordinate of a detected face and (x_l, y_l) is its top-left coordinate;
the calculated average is compared with the face center coordinates in real images to obtain the error range of the face center coordinates, and constraints are added according to this error range:
x_c − m ≤ x_c_real ≤ x_c + m (9)
y_c − n ≤ y_c_real ≤ y_c + n (10)
where (x_c_real, y_c_real) is the actually detected face center position;
(d) extracting and processing the detected faces to complete the face database;
each detected face is extracted and converted into a grayscale face image of 28 × 28 pixels; the processed face images are stored according to their constraint conditions, completing the establishment of the practical attendance face database;
(e) training the model;
the 28 × 28-pixel grayscale face images in the face database are fed as training data into the deep convolutional neural network model, which is trained over multiple iterations to complete the training; the training process is divided into two steps: forward propagation and back-propagation;
① the purpose of forward propagation is to feed the training data through the network to obtain its activations; it comprises a convolutional layer and a down-sampling layer;
the convolutional layer is processed first: the convolution features extracted by convolutional layer l are obtained using formula (11); in the formula, M_j is the selected set of input feature maps, the kernel and bias are those of convolutional layer l, and the convolution feature of layer l is obtained through the activation function f;
after the convolution features of the convolutional layer are obtained, formula (12) applies down-sampling to them, i.e. aggregate statistics are computed over the features at different positions; in the formula, down(·) is the down-sampling function, β is a multiplicative bias, and b is an additive bias;
2. backpropagation is adjusted weight and biasing by minimizing residual error;Including one layer of convolutional layer and one layer of down-sampling
Layer;
For convolutional layer, next layer is down-sampling layer, and residual error is calculated using formula (13);
In formula, up () is one and up-samples function, and corresponding element is multiplied in o representing matrixes;
Wherein,
ul=Wlxl-1+bl (14)
xl=f (ul) (15)
In formula, f is activation primitive;
By obtained residual errorThe gradient of biasing b is calculated using formula (16);
In formula, u, v indicate the coordinate value in characteristic pattern;
Define (p_i^(l-1))_{u,v} as the n × n pixel block that is element-wise multiplied with the kernel during convolution; the gradient of the convolution kernel is computed using formula (17):

∂E/∂k_ij^l = Σ_{u,v} (δ_j^l)_{u,v} · (p_i^(l-1))_{u,v} (17)

The value at position (u, v) of the output convolution feature map is the result of the element-wise product of the n × n pixel block at position (u, v) in the previous layer with the convolution kernel k_ij;
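A sketch of formulas (16) and (17) as stated, using the correlation convention (patch times residual, without an extra kernel flip); the names are illustrative:

```python
import numpy as np

def bias_gradient(delta):
    """Formula (16): dE/db_j = sum over all positions (u, v) of the residual map."""
    return delta.sum()

def kernel_gradient(x_prev, delta, n):
    """Formula (17): dE/dk_ij = sum_{u,v} delta_uv * p_uv, where p_uv is the
    n x n patch of the previous layer that produced output position (u, v)."""
    oh, ow = delta.shape
    grad = np.zeros((n, n))
    for u in range(oh):
        for v in range(ow):
            grad += delta[u, v] * x_prev[u:u+n, v:v+n]
    return grad
```
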
The down-sampling layer is processed next; when the layer following the down-sampling layer is a convolutional layer, the residual of the down-sampled feature map is computed using formula (18):

δ_j^l = f'(u_j^l) ∘ conv2( δ_j^(l+1), rot180(k_j^(l+1)), 'full' ) (18)

In the formula, rot180(k_j^(l+1)) denotes rotating the convolution kernel k_j^(l+1) by 180°, i.e. exchanging the matrix elements across the diagonal; conv2 is the full convolution function, and 'full' indicates that vacant positions in the full convolution result are padded with 0;
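A sketch of formula (18): the residual of the next convolutional layer is fully convolved with the 180°-rotated kernel, then gated by f'(u^l). The names are illustrative:

```python
import numpy as np

def rot180(k):
    """Rotate the kernel by 180 degrees (flip both axes)."""
    return k[::-1, ::-1]

def conv2_full(x, k):
    """'full' 2-D convolution: zero-pad x so every partial overlap contributes."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh - 1, kh - 1), (kw - 1, kw - 1)))  # vacancies filled with 0
    kf = k[::-1, ::-1]
    oh, ow = xp.shape[0] - kh + 1, xp.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for u in range(oh):
        for v in range(ow):
            out[u, v] = np.sum(xp[u:u+kh, v:v+kw] * kf)
    return out

def downsample_layer_residual(delta_next, k_next, u_l, f_prime):
    """Formula (18): delta^l = f'(u^l) o conv2(delta^{l+1}, rot180(k^{l+1}), 'full')."""
    return f_prime(u_l) * conv2_full(delta_next, rot180(k_next))
```

Note that the 'full' mode grows a 2 × 2 residual convolved with a 3 × 3 kernel back to a 4 × 4 map, matching the size before the 'valid' forward convolution.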
After obtaining residual error, the gradient that formula (19) calculates biasing b is reapplied;
In formula, u, v indicate the coordinate value in characteristic pattern;
DefinitionConvolution kernel gradient is calculated using the residual error application formula (20) acquired;
3. The output layer classifies the features; it is essentially a classifier, and the multi-class classification problem is solved with a softmax classifier;
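A minimal sketch of the softmax output layer, assuming a linear map W, b over the extracted features; the names are illustrative:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: a probability over the face-label classes."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(features, W, b):
    """Output layer: return the training-set face label with the highest probability."""
    probs = softmax(W @ features + b)
    return int(np.argmax(probs)), probs
```
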
(f) The trained model is tested with test data;
1. Acquisition of the test data:
Images of all persons to be checked are captured; the faces appearing in the images are detected and processed to obtain face grayscale images of size 28 × 28 pixels, and the processed images are saved as test data;
2. The test data is fed into the trained model; the model identifies the input test data and outputs the corresponding face labels from the training set, completing face recognition;
(g) Many experiments are carried out by adjusting the number of layers of the convolutional neural network, the number of convolution kernels in each layer, and the learning rate; steps (e) and (f) are repeated and the experimental results are compared; the model parameters giving the highest recognition rate are selected, and those parameters and the trained model are saved.
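The parameter search in step (g) can be sketched as a simple grid search; `train_and_evaluate` is a hypothetical callback standing in for one run of steps (e) and (f) that returns the recognition rate:

```python
import itertools

def grid_search(train_and_evaluate, layer_counts, kernel_counts, learning_rates):
    """Repeat steps (e) and (f) for every parameter combination and keep the
    configuration with the highest recognition rate."""
    best_params, best_acc = None, -1.0
    for layers, kernels, lr in itertools.product(layer_counts, kernel_counts, learning_rates):
        params = {"layers": layers, "kernels": kernels, "lr": lr}
        acc = train_and_evaluate(params)  # one full train + test cycle
        if acc > best_acc:
            best_params, best_acc = params, acc
    return best_params, best_acc
```
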
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610504632.0A CN106204779B (en) | 2016-06-30 | 2016-06-30 | Check class attendance method based on plurality of human faces data collection strategy and deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610504632.0A CN106204779B (en) | 2016-06-30 | 2016-06-30 | Check class attendance method based on plurality of human faces data collection strategy and deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106204779A CN106204779A (en) | 2016-12-07 |
CN106204779B true CN106204779B (en) | 2018-08-31 |
Family
ID=57462734
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610504632.0A Active CN106204779B (en) | 2016-06-30 | 2016-06-30 | Check class attendance method based on plurality of human faces data collection strategy and deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106204779B (en) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778589A (en) * | 2016-12-09 | 2017-05-31 | 厦门大学 | A kind of masked method for detecting human face of robust based on modified LeNet |
CN106960185B (en) * | 2017-03-10 | 2019-10-25 | 陕西师范大学 | The Pose-varied face recognition method of linear discriminant deepness belief network |
CN107292278A (en) * | 2017-06-30 | 2017-10-24 | 哈尔滨理工大学 | A kind of face identification device and its recognition methods based on Adaboost algorithm |
CN108228872A (en) * | 2017-07-21 | 2018-06-29 | 北京市商汤科技开发有限公司 | Facial image De-weight method and device, electronic equipment, storage medium, program |
CN108073917A (en) * | 2018-01-24 | 2018-05-25 | 燕山大学 | A kind of face identification method based on convolutional neural networks |
CN108416797A (en) * | 2018-02-27 | 2018-08-17 | 鲁东大学 | A kind of method, equipment and the storage medium of detection Behavioral change |
CN108427921A (en) * | 2018-02-28 | 2018-08-21 | 辽宁科技大学 | A kind of face identification method based on convolutional neural networks |
CN108830980A (en) * | 2018-05-22 | 2018-11-16 | 重庆大学 | Security protection integral intelligent robot is received in Study of Intelligent Robot Control method, apparatus and attendance |
CN108875654B (en) * | 2018-06-25 | 2021-03-05 | 深圳云天励飞技术有限公司 | Face feature acquisition method and device |
CN109460974B (en) * | 2018-10-29 | 2021-09-07 | 广州皓云原智信息科技有限公司 | Attendance system based on gesture recognition |
CN109766813B (en) * | 2018-12-31 | 2023-04-07 | 陕西师范大学 | Dictionary learning face recognition method based on symmetric face expansion samples |
CN110313894A (en) * | 2019-04-15 | 2019-10-11 | 四川大学 | Arrhythmia cordis sorting algorithm based on convolutional neural networks |
CN110263618B (en) * | 2019-04-30 | 2023-10-20 | 创新先进技术有限公司 | Iteration method and device of nuclear body model |
CN110276263B (en) * | 2019-05-24 | 2021-05-14 | 长江大学 | Face recognition system and recognition method |
CN110728225B (en) * | 2019-10-08 | 2022-04-19 | 北京联华博创科技有限公司 | High-speed face searching method for attendance checking |
CN110852704B (en) * | 2019-10-22 | 2023-04-25 | 佛山科学技术学院 | Attendance checking method, system, equipment and medium based on dense micro face recognition |
CN111507227B (en) * | 2020-04-10 | 2023-04-18 | 南京汉韬科技有限公司 | Multi-student individual segmentation and state autonomous identification method based on deep learning |
CN111814704B (en) * | 2020-07-14 | 2021-11-26 | 陕西师范大学 | Full convolution examination room target detection method based on cascade attention and point supervision mechanism |
CN111881876B (en) * | 2020-08-06 | 2022-04-08 | 桂林电子科技大学 | Attendance checking method based on single-order anchor-free detection network |
CN113450369B (en) * | 2021-04-20 | 2023-08-04 | 广州铁路职业技术学院(广州铁路机械学校) | Classroom analysis system and method based on face recognition technology |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102223520A (en) * | 2011-04-15 | 2011-10-19 | 北京易子微科技有限公司 | Intelligent face recognition video monitoring system and implementation method thereof |
CN104573679B (en) * | 2015-02-08 | 2018-06-22 | 天津艾思科尔科技有限公司 | Face identification system based on deep learning under monitoring scene |
2016-06-30 CN CN201610504632.0A patent/CN106204779B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN106204779A (en) | 2016-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106204779B (en) | Check class attendance method based on plurality of human faces data collection strategy and deep learning | |
CN108537743B (en) | Face image enhancement method based on generation countermeasure network | |
CN108229381B (en) | Face image generation method and device, storage medium and computer equipment | |
CN111127308B (en) | Mirror image feature rearrangement restoration method for single sample face recognition under partial shielding | |
CN107844795B (en) | Convolutional neural networks feature extracting method based on principal component analysis | |
CN107977932A (en) | It is a kind of based on can differentiate attribute constraint generation confrontation network face image super-resolution reconstruction method | |
CN109410239A (en) | A kind of text image super resolution ratio reconstruction method generating confrontation network based on condition | |
CN109558832A (en) | A kind of human body attitude detection method, device, equipment and storage medium | |
CN108388896A (en) | A kind of licence plate recognition method based on dynamic time sequence convolutional neural networks | |
CN108229458A (en) | A kind of intelligent flame recognition methods based on motion detection and multi-feature extraction | |
CN107463920A (en) | A kind of face identification method for eliminating partial occlusion thing and influenceing | |
CN107909556A (en) | Video image rain removing method based on convolutional neural networks | |
CN109376747A (en) | A kind of video flame detecting method based on double-current convolutional neural networks | |
CN109784148A (en) | Biopsy method and device | |
CN110348322A (en) | Human face in-vivo detection method and equipment based on multi-feature fusion | |
CN107633229A (en) | Method for detecting human face and device based on convolutional neural networks | |
CN108009493A (en) | Face anti-fraud recognition methods based on action enhancing | |
CN110263768A (en) | A kind of face identification method based on depth residual error network | |
CN109902613A (en) | A kind of human body feature extraction method based on transfer learning and image enhancement | |
CN109753864A (en) | A kind of face identification method based on caffe deep learning frame | |
CN108647689A (en) | Magic square restored method and its device based on GoogLeNet neural networks | |
CN111476727B (en) | Video motion enhancement method for face-changing video detection | |
CN114596608B (en) | Double-stream video face counterfeiting detection method and system based on multiple clues | |
CN116229528A (en) | Living body palm vein detection method, device, equipment and storage medium | |
CN110826380A (en) | Abnormal signature identification method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |