CN109344731A - Lightweight neural-network-based face recognition method - Google Patents
Lightweight neural-network-based face recognition method
- Publication number
- CN109344731A CN109344731A CN201811049087.6A CN201811049087A CN109344731A CN 109344731 A CN109344731 A CN 109344731A CN 201811049087 A CN201811049087 A CN 201811049087A CN 109344731 A CN109344731 A CN 109344731A
- Authority
- CN
- China
- Prior art keywords
- face
- weight
- lightweight
- network model
- human
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/32—Normalisation of the pattern dimensions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention discloses a lightweight face recognition method based on neural networks, belonging to the technical field of face recognition. To address the technical problems of existing deep-learning-based face recognition systems, such as overly large models and slow speed, the invention optimizes the face feature extraction network and compresses the deep neural network models involved in face recognition, thereby accelerating face recognition with little or no loss of system accuracy. The lightweight recognition method of the invention can be deployed on small mobile terminals and embedded devices such as the Raspberry Pi and single-chip microcontrollers, and can be applied to access control systems, supermarket membership management systems, and examinee identity authentication systems.
Description
Technical field
The invention belongs to the technical field of face recognition, and in particular relates to a lightweight neural-network-based face recognition method.
Background art
Face recognition is a biometric identification technology that authenticates identity using physiological or behavioural characteristics that are inherent to a person and can uniquely indicate that person's identity. With the rise of human-computer interaction and the smart-city concept, face recognition technology has become increasingly important. As one of the main research directions in pattern recognition and machine learning, a large number of face recognition algorithms have been proposed.
In recent years, deep neural networks have made substantial progress in image classification and recognition, and have shown excellent performance in handwritten digit recognition, license plate recognition and face detection, achieving very good recognition results.
However, face recognition systems built on deep neural networks suffer from excessively large models with too many internal parameters. These weight parameters consume large amounts of computing and storage resources; some common deep neural network models for face recognition exceed several hundred megabytes, which makes them difficult to deploy on mobile devices.
Summary of the invention
The purpose of the invention is to address the technical problems of existing deep-learning-based face recognition systems, such as overly large models and slow speed. The invention provides a lightweight neural-network-based face recognition method that compresses the model size as much as possible and accelerates face recognition while essentially preserving system accuracy.
The lightweight neural-network-based face recognition method of the invention comprises the following steps:
Constructing a face detector:
A deep convolutional neural network model is used as the face detection network model, comprising a face feature extraction network, a fully connected layer and a non-maximum suppression layer; when training the face detection network model, a first lightweighting treatment is applied to the face feature extraction network model; the trained face detection network model serves as the lightweight face detector. The first lightweighting treatment comprises model parameter pruning and model parameter quantization: model parameter quantization is applied to the face feature extraction network, and model parameter pruning is applied to the fully connected layer.
Constructing a face feature extractor:
A deep convolutional neural network model is used as the facial feature extraction network model and trained by deep learning; during training, a second lightweighting treatment is applied to the facial feature extraction network model; the trained facial feature extraction network model serves as the lightweight face feature extractor. The second lightweighting treatment comprises model parameter quantization.
Constructing a face recognition database:
Photos of the same person from different angles are size-normalized and fed into the lightweight face feature extractor; the resulting facial feature vectors are stored in the face recognition database. The normalized size matches the input of the lightweight face feature extractor.
Face region detection:
The image to be recognized is size-normalized and fed into the lightweight face detector, which detects the presence of faces in the image to be recognized and segments the face region image. The normalized size matches the input of the lightweight face detector.
Face recognition processing:
The face region image obtained in the face region detection step is normalized and fed into the lightweight face feature extractor to obtain the facial feature vector to be recognized. The normalized size matches the input of the lightweight face feature extractor.
The facial feature vector to be recognized is then compared with the facial feature vectors in the face recognition database, and the closest match is taken as the face recognition result.
The model parameter pruning and the model parameter quantization proceed as follows:
(a) Model parameter pruning:
For each neuron a_i of the fully connected layer, the correlation coefficient r_ik between a_i and the weight b_ik of each neuron of the previous layer is computed as
r_ik = E[(a_i − u_ai)(b_ik − u_bik)] / (δ_ai · δ_bik),
where i indexes the neurons of the fully connected layer, k indexes the weights, u_ai and u_bik denote the means of neuron a_i and weight b_ik respectively, and δ_ai and δ_bik denote their standard deviations.
A mask matrix representing the pruning is constructed from the correlation coefficients: S×K+ positive and S×K− negative correlation coefficients are sampled respectively; the mask parameters of the weight indices corresponding to the sampled coefficients are set to active, and the mask parameters of all other positions are set to inactive, yielding the mask matrix representing the pruning.
Here S denotes a preset sparsity with 0 < S < 1, K+ denotes the number of positive correlation coefficients, and K− denotes the number of negative correlation coefficients.
The sampling proceeds as follows: the positive and the negative correlation coefficients r_ik are each sorted in descending order and split into a front part and a rear part; λ×S×K* coefficients are randomly sampled from the front part and (1−λ)×S×K* from the rear part, where λ denotes a preset weight with value between 0 and 1, and K* denotes the number of positive (respectively negative) correlation coefficients.
(b) Model parameter quantization:
The weights of the m input neurons are clustered and the cluster centres of each class are obtained, the number of classes being equal to the number of output neurons n.
Gradient quantization is applied to the m weights to obtain the gradient-quantized weights w_g; accumulating the w_g of the same class yields the per-class reduction Δw.
Each class's cluster centre is reduced by that class's Δw to obtain the quantized value of that class of weights; and all weights of the same class are set to the quantized value of that class.
In conclusion by adopting the above-described technical solution, the beneficial effects of the present invention are: the present invention passes through to face spy
Sign extracts the optimization of network, the size of deep neural network model involved in recognition of face processing is had compressed, substantially not
Under the premise of losing system accuracy, accelerate recognition of face processing speed.The recognition methods of lightweight of the invention can be deployed in small
Type mobile terminal such as raspberry pie on the small devices such as single-chip microcontroller, can be applied to access control system, supermarket's member registration management
System, in examinee's identity authentication management system.
Brief description of the drawings
Fig. 1 is a schematic diagram of the lightweight face recognition process of the invention;
Fig. 2 is a schematic diagram of the face detection model;
Fig. 3 is a schematic diagram of the weight lightweighting process of the invention;
Fig. 4 is a schematic diagram of the existing Fire module structure;
Fig. 5 is a schematic diagram of the training of the facial feature extraction network model.
Specific embodiment
To make the objects, technical solutions and advantages of the invention clearer, the invention is described in further detail below with reference to the embodiments and the accompanying drawings.
The invention compresses the deep neural network models involved in face recognition by optimizing the face feature extraction network, accelerating face recognition while essentially preserving system accuracy. Referring to Fig. 1, the main steps are:
S1: The original image (a face image captured in a natural scene) is size-normalized (preferably to 300×300) and fed into the face feature extractor, which extracts the face features used for face region detection; the SSD (Single Shot MultiBox Detector) face detector then detects the presence of faces in the original image and segments the face regions;
S2: The segmented face region image is pre-processed to obtain a normalized face image;
S3: Facial features are extracted from the face image obtained in step S2 using the lightweight face feature extractor;
S4: KNN (k-nearest-neighbour) classification is applied to the facial features to accomplish face recognition.
Each step is detailed below.
Step S1 is implemented as follows:
Features of the original image are extracted with a neural network and fed into the SSD face detector.
Preferably, a MobileNet neural network extracts the original image features (face features). MobileNet is an efficient, small-footprint vision model published by Google, designed to make full use of the limited resources of mobile devices and embedded applications. It extracts the useful features of the original image, and multiple feature layers are fed into the SSD (Single Shot MultiBox Detector) model for face detection. By using MobileNet as the base network for extracting the features of the face detection frames, the invention improves the processing speed of the model.
SSD is a method that performs target detection and recognition with a single deep neural network model. It combines the bounding-box idea of Faster R-CNN (a faster region-based convolutional neural network) with the single-network detection idea of YOLO (You Only Look Once), and therefore offers the accuracy of Faster R-CNN together with the detection speed of YOLO. The invention applies the SSD face detector to the image to detect the face regions. Referring to Fig. 2, the entire face detection network model comprises MobileNet, additional feature layers (SSD_1 to SSD_5), a fully connected layer and a non-maximum suppression layer.
MobileNet serves as the base network for feature extraction and extracts the original image features.
The additional feature layers assist feature generation by producing feature layers of different sizes with convolution (conv) kernels of different sizes. In this embodiment, each feature vector extracted by MobileNet has dimension 512 and corresponds to a 38×38 image region. Each of the additional feature layers SSD_1 to SSD_4 comprises two convolutional layers, a depthwise separable convolution (DSC) layer and an ordinary convolutional (conv) layer, specifically: SSD_1: DSC-3×3×1024 followed by conv-1×1×1024; SSD_2: conv-1×1×256 followed by DSC-3×3×512; SSD_3: conv-1×1×128 followed by DSC-3×3×256; SSD_4: conv-1×1×128 followed by DSC-3×3×256. The additional feature layer SSD_5 comprises two ordinary convolutional layers: conv-1×1×128 followed by conv-1×1×256. In a convolutional layer, a×b×c denotes the kernel size. The image regions corresponding to SSD_1 to SSD_5 are, in order: 19×19, 10×10, 5×5, 3×3, 1×1.
The output of the SSD face detector is a series of fixed-size bounding boxes (face detection frames) together with the probability that each bounding box contains a face (detection confidence). The non-maximum suppression layer then applies non-maximum suppression (Non-Maximum Suppression) to all face detection frames based on their confidences, yielding the final face detection frames, i.e. the face regions.
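The non-maximum suppression step can be sketched as follows. This is a generic greedy NMS on confidence-scored boxes, not code taken from the patent; the box format (x1, y1, x2, y2) and the overlap threshold are illustrative choices.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); intersection-over-union of two boxes.
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    # Greedily keep the highest-confidence box and drop every remaining
    # box that overlaps it by more than iou_thresh.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep
```

For example, two heavily overlapping detections of the same face collapse to the single higher-confidence box, while a detection elsewhere in the image survives.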
The above network model is trained by deep learning; once a preset detection accuracy is met, it can serve as the SSD face detector for face detection. During training, the first lightweighting treatment (model parameter pruning and model parameter quantization) is applied to the network model used. In this embodiment, training is performed on the FDDB face detection data set.
Step S2 specifically includes:
S201: cropping the image within the face region to obtain a face image;
S202: applying rotation correction to the face image;
S203: normalizing the face image, preferably to a size of 112×112 pixels.
Step S3 specifically includes:
First, a facial feature extraction network model is selected and then trained; the trained facial feature extraction network model serves as the face feature extractor.
During training, the second lightweighting treatment (model parameter quantization) needs to be applied to the selected facial feature extraction network model.
The model parameter pruning and model parameter quantization in the lightweighting treatments of the invention proceed as follows:
(a) Model parameter pruning.
Starting from the last layer of the network, the layer is pruned according to the rule below, the network is then retrained, and the process is repeated. Pruning is implemented by attaching a mask parameter to each weight; the mask parameter is 1 only at the activated positions and 0 everywhere else.
Pruning criterion: since the fully connected layers account for almost 90% of the parameters of a neural network, the invention prunes only the fully connected layer of the face feature extraction network model. Suppose a_i is a neuron of the current layer and the previous layer has K neurons, so there are K weight parameters b_i1, b_i2, ..., b_iK. The correlation coefficient between a_i and each b_ik can then be computed as
r_ik = E[(a_i − u_ai)(b_ik − u_bik)] / (δ_ai · δ_bik),
where the means u and standard deviations δ are obtained on the validation set, and the subscripts distinguish the current-layer neuron a_i from the weight b_ik of each neuron of the previous layer.
The mask matrix representing the pruning is constructed from the correlation coefficients:
All positive correlation coefficients r_ik are first sorted in descending order and split into a front part and a rear part; λ×S×K+ coefficients are randomly sampled from the front part and (1−λ)×S×K+ from the rear part. Here λ denotes a preset weight with value between 0 and 1, set to 0.75 in this embodiment; S denotes a preset sparsity with 0 < S < 1; and K+ denotes the number of positive correlation coefficients.
The same sampling is applied to the negative correlation coefficients, yielding S×K− sampled results, where K− denotes the number of negative correlation coefficients.
The mask parameters of the weight indices corresponding to the S×K+ sampled positive and the S×K− sampled negative correlation coefficients are set to active (mask parameter 1), and the mask parameters of all other positions are set to inactive (mask parameter 0), yielding the mask matrix representing the pruning.
This pruning scheme is considerably more accurate than the common approach of pruning by parameter magnitude, which shows that the magnitude of a weight does not reliably indicate its importance.
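The mask construction described above can be sketched in plain Python. The Pearson correlation, the descending sort, and the λ/(1−λ) split between a front and a rear part follow the text; sorting each sign group by magnitude, splitting the group into equal halves, and the toy correlation values are interpretations and illustrative assumptions, not details fixed by the patent.

```python
import math
import random

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def prune_mask(corrs, S, lam, rng):
    # corrs: {weight_index: correlation r_ik} for one fully connected layer.
    # Sample S*K+ positive and S*K- negative coefficients; the sampled
    # indices stay active (mask 1), everything else is pruned (mask 0).
    mask = {k: 0 for k in corrs}
    for sign in (+1, -1):
        group = sorted((k for k, r in corrs.items() if sign * r > 0),
                       key=lambda k: abs(corrs[k]), reverse=True)
        n_keep = round(S * len(group))          # S * K+ or S * K-
        n_front = round(lam * n_keep)           # lambda * S * K*
        front, rear = group[:len(group) // 2], group[len(group) // 2:]
        picked = rng.sample(front, min(n_front, len(front)))
        picked += rng.sample(rear, min(n_keep - len(picked), len(rear)))
        for k in picked:
            mask[k] = 1
    return mask
```

With sparsity S = 0.5 and λ = 0.75, a layer with 8 positive and 4 negative coefficients keeps 4 + 2 = 6 active weights.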
(b) Model parameter quantization:
The network is further compressed by quantizing the weights (quantization reduces the number of bits used to represent this part of the data).
Referring to Fig. 3, the weight quantization of the invention proceeds as follows:
Suppose there are m input neurons and n output neurons (m and n are not necessarily equal); in this example both m and n are assumed to be 4, so the weight matrix is 4×4, and so is the gradient matrix.
In this embodiment the weights are quantized to 4 levels, shown in Fig. 3 with 4 different grey colours, so only 4 code words and 16 2-bit indices need to be stored.
First, the m weights are clustered and the cluster centres of each class are obtained, the number of classes being equal to the number of output neurons n, as shown in the first row of Fig. 3.
Then gradient quantization is applied to the m weights to obtain the gradient-quantized weights, as shown in the left figure of the second row of Fig. 3; the per-class reductions are obtained by accumulating the values of the same class.
For ease of calculating the per-class reductions, in this embodiment the values of the same class are first grouped by rows, and the per-class reductions are then output, as shown in the second row of Fig. 3.
Finally, each class's cluster centre is reduced by that class's reduction to obtain the quantized value of that class of weights; that is, all weights of the same class take the quantized value of that class.
Until the expected detection accuracy is reached, training continues from the quantized weights, and the quantized values are updated accordingly.
That is, in the invention, after each batch of training yields the iteration-updated weights, a lightweighting treatment is applied to them by parameter pruning and/or model parameter quantization; the next batch then continues the iterative update of the currently lightweighted weights, and this cycle repeats until the detection accuracy is met. The preferred order of the model parameter quantization is 3 levels.
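The cluster-and-reduce update above can be sketched minimally as follows. The sketch assumes the cluster assignment and centres are already available (e.g. from k-means), treats the per-weight gradients as already gradient-quantized, and folds the learning rate into a single `lr` factor; all of these are simplifying assumptions, not details spelled out in the patent.

```python
def quantize_weights(assign, centres, grads, lr=1.0):
    # assign[i]: cluster class of weight i; centres[c]: class cluster centre;
    # grads[i]: (gradient-quantized) gradient of weight i.
    # The per-class reduction (Delta w) is the sum of the gradients of the
    # weights belonging to that class, scaled by the learning rate.
    n = len(centres)
    delta = [0.0] * n
    for g, c in zip(grads, assign):
        delta[c] += g
    # Each class's centre is reduced by its class's Delta w ...
    new_centres = [centres[c] - lr * delta[c] for c in range(n)]
    # ... and every weight of a class is set to that quantized value.
    return [new_centres[c] for c in assign], new_centres
```

One such update per batch, alternating with further training, matches the iterate-until-accuracy loop described above.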
In this embodiment, the preferred facial feature extraction network model is obtained by replacing the 3×3 convolutional layers of the neural network ZFNet used in the study of Zeiler & Fergus with the Fire module structure, in order to reduce the parameter count. The neural network ZFNet used in the Zeiler & Fergus study specifically refers to the document "Visualizing and Understanding Convolutional Networks". The Fire module structure is shown in Fig. 4; that is, the network layers of Fig. 4 with the input (Input) and output (Output) layers removed replace the 3×3 convolutional layers of ZFNet. With the facial feature extraction network model employed in this embodiment, a 128-dimensional facial feature vector is obtained.
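The parameter saving from replacing a plain 3×3 convolution with a Fire module (a 1×1 squeeze layer followed by parallel 1×1 and 3×3 expand layers, as in SqueezeNet) can be checked with a small count. The channel numbers below are illustrative, not taken from the patent; biases are ignored.

```python
def conv_params(k, c_in, c_out):
    # Weight count of a k x k convolution, biases ignored.
    return k * k * c_in * c_out

def fire_params(c_in, squeeze, expand1, expand3):
    # Fire module: 1x1 squeeze, then parallel 1x1 and 3x3 expand layers.
    return (conv_params(1, c_in, squeeze)
            + conv_params(1, squeeze, expand1)
            + conv_params(3, squeeze, expand3))

plain = conv_params(3, 256, 256)       # plain 3x3 layer, 256 -> 256 channels
fire = fire_params(256, 32, 128, 128)  # Fire module with 256 output channels
```

With these illustrative channel counts the Fire module needs about one twelfth of the parameters of the plain 3×3 layer, which is the kind of reduction the replacement aims at.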
During training, facial feature information is extracted from the face images of the current batch by the selected deep convolutional neural network, and L2 normalization is then applied to it, yielding a 128-dimensional facial feature representation. The loss function used to judge whether the detection accuracy has been reached is the triplet loss (Triplet Loss), which serves to separate the face images of one individual from the face images of other people, as shown in Fig. 5. In this embodiment, the data set used for training is the LFW (Labeled Faces in the Wild) face data set.
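The triplet loss on L2-normalized embeddings can be sketched as follows; the margin value and the use of squared distances are illustrative assumptions, and the toy 2-D vectors stand in for the 128-dimensional features.

```python
import math

def l2_normalize(v):
    # Scale a feature vector to unit L2 norm, as done to the 128-D features.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def sq_dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull the anchor towards the positive (same identity) and push it away
    # from the negative (different identity) by at least the margin.
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)
```

When the positive is already much closer than the negative, the loss is zero; swapping the roles produces a positive loss that drives the embedding apart.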
Step S4 specifically includes:
Constructing the face recognition database:
Photos of the same person from different angles are size-normalized and fed into the lightweight face feature extractor; the resulting facial feature vectors are stored in the face recognition database.
The facial feature vector of the current object to be recognized is compared with the facial feature vectors in the face recognition database, and the closest match is taken as the face recognition result.
In this embodiment, the feature distance between facial feature vectors is computed as the L2 distance; the face in the face recognition database with the shortest feature distance is the recognition result for the current object to be recognized.
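The database lookup of step S4 amounts to a 1-nearest-neighbour search under L2 distance. A minimal sketch, with toy 2-D features standing in for the 128-dimensional vectors and an illustrative dictionary layout for the database:

```python
import math

def l2_distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def recognize(query, database):
    # database: {identity: [feature vectors from different angles]}.
    # Return the identity whose stored vector is closest to the query.
    best_id, best_d = None, float("inf")
    for identity, vectors in database.items():
        for v in vectors:
            d = l2_distance(query, v)
            if d < best_d:
                best_id, best_d = identity, d
    return best_id
```

Storing several vectors per person (one per angle) makes the nearest-neighbour match more robust to pose, which is why the database is built from photos of the same person at different angles.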
The above is only a specific embodiment of the invention. Unless specifically stated otherwise, any feature disclosed in this specification may be replaced by other equivalent features or features serving a similar purpose; and all of the disclosed features, or all of the steps of the disclosed methods or processes, may be combined in any way, except for mutually exclusive features and/or steps.
Claims (3)
1. A lightweight neural-network-based face recognition method, characterized in that it comprises the following steps:
constructing a face detector:
a deep convolutional neural network model is used as the face detection network model, comprising a face feature extraction network, a fully connected layer and a non-maximum suppression layer; when training the face detection network model, a first lightweighting treatment is applied to the face feature extraction network model; the trained face detection network model serves as the lightweight face detector; wherein the first lightweighting treatment comprises model parameter pruning and model parameter quantization;
constructing a face feature extractor:
a deep convolutional neural network model is used as the facial feature extraction network model and trained by deep learning; during training, a second lightweighting treatment is applied to the facial feature extraction network model; the trained facial feature extraction network model serves as the lightweight face feature extractor; wherein the second lightweighting treatment comprises model parameter quantization;
constructing a face recognition database:
photos of the same person from different angles are size-normalized and fed into the lightweight face feature extractor; the resulting facial feature vectors are stored in the face recognition database; the normalized size matches the input of the lightweight face feature extractor;
face region detection:
the image to be recognized is size-normalized and fed into the lightweight face detector, which detects the presence of faces in the image to be recognized and segments the face region image; the normalized size matches the input of the lightweight face detector;
face recognition processing:
the face region image obtained in the face region detection step is normalized and fed into the lightweight face feature extractor to obtain the facial feature vector to be recognized; the normalized size matches the input of the lightweight face feature extractor;
the facial feature vector to be recognized is then compared with the facial feature vectors in the face recognition database, and the closest match is taken as the face recognition result;
wherein the model parameter pruning and the model parameter quantization proceed as follows:
(a) model parameter pruning:
for each neuron a_i of the fully connected layer, the correlation coefficient r_ik between a_i and the weight b_ik of each neuron of the previous layer is computed as
r_ik = E[(a_i − u_ai)(b_ik − u_bik)] / (δ_ai · δ_bik),
where i indexes the neurons of the fully connected layer, k indexes the weights, u_ai and u_bik denote the means of neuron a_i and weight b_ik respectively, and δ_ai and δ_bik denote their standard deviations;
a mask matrix representing the pruning is constructed from the correlation coefficients: S×K+ positive and S×K− negative correlation coefficients are sampled respectively; the mask parameters of the weight indices corresponding to the sampled coefficients are set to active, and the mask parameters of all other positions are set to inactive, yielding the mask matrix representing the pruning;
where S denotes a preset sparsity with 0 < S < 1, K+ denotes the number of positive correlation coefficients, and K− denotes the number of negative correlation coefficients;
the sampling proceeds as follows: the positive and the negative correlation coefficients r_ik are each sorted in descending order and split into a front part and a rear part; λ×S×K* coefficients are randomly sampled from the front part and (1−λ)×S×K* from the rear part; where λ denotes a preset weight with value between 0 and 1, and K* denotes the number of positive (respectively negative) correlation coefficients;
(b) model parameter quantization:
the weights of the m input neurons are clustered and the cluster centres of each class are obtained, the number of classes being equal to the number of output neurons n;
gradient quantization is applied to the m weights to obtain the gradient-quantized weights w_g; accumulating the w_g of the same class yields the per-class reduction Δw;
each class's cluster centre is reduced by that class's Δw to obtain the quantized value of that class of weights; and all weights of the same class are set to the quantized value of that class.
2. The method as claimed in claim 1, characterized in that the preferred value of the weight λ is 0.75.
3. The method as claimed in claim 1, characterized in that the preferred value of the sparsity S is 0.3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811049087.6A CN109344731B (en) | 2018-09-10 | 2018-09-10 | Lightweight face recognition method based on neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109344731A true CN109344731A (en) | 2019-02-15 |
CN109344731B CN109344731B (en) | 2022-05-03 |
Family
ID=65305058
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811049087.6A Active CN109344731B (en) | 2018-09-10 | 2018-09-10 | Lightweight face recognition method based on neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109344731B (en) |
- 2018-09-10: application CN201811049087.6A filed in China; granted as CN109344731B, legal status Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016118257A1 (en) * | 2015-01-22 | 2016-07-28 | Qualcomm Incorporated | Model compression and fine-tuning |
CN108496174A (en) * | 2015-10-28 | 2018-09-04 | 北京市商汤科技开发有限公司 | Method and system for face recognition |
CN106778918A (en) * | 2017-01-22 | 2017-05-31 | 北京飞搜科技有限公司 | Deep learning image recognition system for mobile phone terminals and implementation method |
CN106682650A (en) * | 2017-01-26 | 2017-05-17 | 北京中科神探科技有限公司 | Mobile terminal face recognition method and system based on embedded deep learning |
CN107944555A (en) * | 2017-12-07 | 2018-04-20 | 广州华多网络科技有限公司 | Neural network compression and acceleration method, storage device and terminal |
CN108229442A (en) * | 2018-02-07 | 2018-06-29 | 西南科技大学 | Fast and stable face detection method in image sequences based on MS-KCF |
Non-Patent Citations (1)
Title |
---|
Cao Wenlong et al., "A Survey of Neural Network Model Compression Methods", Application Research of Computers (《计算机应用研究》) * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110348357B (en) * | 2019-07-03 | 2022-10-11 | 昆明理工大学 | Rapid target detection method based on deep convolutional neural network |
CN110348357A (en) * | 2019-07-03 | 2019-10-18 | 昆明理工大学 | Fast target detection method based on deep convolutional neural network |
CN110363137A (en) * | 2019-07-12 | 2019-10-22 | 创新奇智(广州)科技有限公司 | Face detection optimization model, method, system and electronic device |
CN110348423A (en) * | 2019-07-19 | 2019-10-18 | 西安电子科技大学 | Real-time face detection method based on deep learning |
CN110414419A (en) * | 2019-07-25 | 2019-11-05 | 四川长虹电器股份有限公司 | Posture detection system and method based on mobile terminal viewer |
CN111091559A (en) * | 2019-12-17 | 2020-05-01 | 山东大学齐鲁医院 | Deep-learning-based auxiliary diagnosis system for small-intestinal lymphoma under enteroscopy |
CN111401360A (en) * | 2020-03-02 | 2020-07-10 | 杭州雄迈集成电路技术股份有限公司 | Method and system for optimizing license plate detection model and license plate detection method and system |
CN111488806A (en) * | 2020-03-25 | 2020-08-04 | 天津大学 | Multi-scale face recognition method based on parallel branch neural network |
CN113536824A (en) * | 2020-04-13 | 2021-10-22 | 南京行者易智能交通科技有限公司 | Improved method of passenger detection model based on YOLOv3 and model training method |
CN113536824B (en) * | 2020-04-13 | 2024-01-12 | 南京行者易智能交通科技有限公司 | Improved method of passenger detection model based on YOLOv3 and model training method |
CN111582377A (en) * | 2020-05-09 | 2020-08-25 | 济南浪潮高新科技投资发展有限公司 | Edge end target detection method and system based on model compression |
CN111639619A (en) * | 2020-06-08 | 2020-09-08 | 金陵科技学院 | Face recognition device and recognition method based on deep learning |
CN111639619B (en) * | 2020-06-08 | 2024-01-30 | 金陵科技学院 | Face recognition device and method based on deep learning |
CN113221908A (en) * | 2021-06-04 | 2021-08-06 | 深圳龙岗智能视听研究院 | Digital identification method and equipment based on deep convolutional neural network |
CN113221908B (en) * | 2021-06-04 | 2024-04-16 | 深圳龙岗智能视听研究院 | Digital identification method and device based on deep convolutional neural network |
CN113255576A (en) * | 2021-06-18 | 2021-08-13 | 第六镜科技(北京)有限公司 | Face recognition method and device |
Also Published As
Publication number | Publication date |
---|---|
CN109344731B (en) | 2022-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109344731A (en) | Lightweight face recognition method based on neural network | |
CN111898547B (en) | Training method, device, equipment and storage medium of face recognition model | |
CN112308158B (en) | Multi-source field self-adaptive model and method based on partial feature alignment | |
Zhang et al. | Waste image classification based on transfer learning and convolutional neural network | |
Chang et al. | Unsupervised transfer learning via multi-scale convolutional sparse coding for biomedical applications | |
CN110363183B (en) | Service robot visual image privacy protection method based on generative adversarial network |
Huang et al. | Deep embedding network for clustering | |
CN108427921A (en) | Face recognition method based on convolutional neural network |
CN105975932B (en) | Gait recognition classification method based on time-series shapelets |
CN111274916A (en) | Face recognition method and face recognition device | |
Navaneethan et al. | The Human Eye Pupil Detection System Using BAT Optimized Deep Learning Architecture. | |
WO2015070764A1 (en) | Face positioning method and device | |
CN108921019A (en) | Gait recognition method based on GEI and TripletLoss-DenseNet |
Ying et al. | Human ear recognition based on deep convolutional neural network | |
CN109815920A (en) | Gesture recognition method based on convolutional neural networks and adversarial convolutional neural networks |
CN110096991A (en) | Sign language recognition method based on convolutional neural network |
CN116071668A (en) | Unmanned aerial vehicle aerial image target detection method based on multi-scale feature fusion | |
CN107832753A (en) | Face feature extraction method based on four-valued weights and multiple classification |
CN117523208B (en) | Identity recognition method and system based on image semantic segmentation and classification | |
CN110516615A (en) | Human-vehicle diversion control method based on convolutional neural network |
Peng et al. | A survey: Image classification models based on convolutional neural networks | |
Chato et al. | Application of machine learning to biometric systems-A survey | |
CN116524352A (en) | Remote sensing image water body extraction method and device | |
CN116246305A (en) | Pedestrian retrieval method based on hybrid component transformation network | |
CN111898479B (en) | Mask wearing recognition method and device based on full convolution single-step target detection algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||