CN106991408A - Generation method for a candidate-box generation network, and face detection method - Google Patents

Generation method for a candidate-box generation network, and face detection method

Info

Publication number
CN106991408A
Authority
CN
China
Prior art keywords
candidate box
network
face
training
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710242833.2A
Other languages
Chinese (zh)
Inventor
段翰聪
赵子天
邹涵江
文慧
张帆
闵革勇
孙振兴
陈绍斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201710242833.2A priority Critical patent/CN106991408A/en
Publication of CN106991408A publication Critical patent/CN106991408A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a generation method for a candidate-box generation network and a face detection method. The generation method comprises the following steps: A, input a picture into the network and, through convolution and pooling operations, obtain a feature map; B, map each point of the feature map back to its receptive field in the original image and, taking that point as a reference, generate a certain number of candidate boxes according to a candidate-box area and a candidate-box area scaling ratio; C, divide the candidate boxes into positive and negative samples; D, randomly sample several of the candidate boxes produced in step C to optimize the loss function. Generating candidate boxes with the candidate-box generation network requires little computation.

Description

Generation method for a candidate-box generation network, and face detection method
Technical field
The present invention relates to the field of face detection methods, and in particular to a generation method for a candidate-box generation network and a face detection method based on that candidate-box generation network.
Background technology
Face detection is one of the most studied problems in the field of computer vision, not only because face detection within object detection is challenging, but also because countless applications require face detection as a first step. The goal of the face detection task is, for any given image or sequence of images, to judge automatically by machine whether a face exists in the image or sequence and, if one exists, to find its position and size. The face detection problem is usually formulated as a binary classification problem, i.e., distinguishing face from non-face.
Face images show huge variation in appearance and shape, and in real scenes factors such as illumination, occlusion, angle, and pose further increase the difficulty of face detection. Classical face detection methods mainly train classifiers on face and non-face image samples, then generate candidate boxes on the input image, and use the trained classifier to classify each candidate window. With the continued development and adoption of deep neural networks, network models of various structures have been used for face detection; however, classifiers of low model complexity lack sufficient modeling ability, while classifiers of high model complexity incur high computational cost. In practical applications, because video and images must be processed in real time under limited computing resources, many current methods struggle to perform well in both accuracy and speed at the same time.
Analyzing the face detection pipeline, one finds that classifying a large number of candidate boxes is one cause of the high computational cost. Two kinds of face candidate-box generation methods currently exist: one is the sliding-window-based candidate-box generation technique, the other is the region-based candidate-box generation technique, but both conventional methods have certain problems in practical applications. The former can produce roughly 10^6 to 10^7 candidate boxes when facing multi-scale face detection, which is quite time-consuming; the latter produces only a few thousand or even a few hundred candidate boxes, but every candidate box produced by this method must be fed into a convolutional neural network for computation, which is equivalent to performing hundreds or thousands of forward passes for a single picture, and is likewise quite time-consuming.
Summary of the invention
To solve the above technical problems, the present invention provides a generation method for a candidate-box generation network and a face detection method.
The present invention is achieved through the following technical solutions:
A generation method for a candidate-box generation network comprises the following steps:
A, input a picture into the network and, through convolution and pooling operations, obtain a feature map;
B, map each point of the feature map back to its receptive field in the original image and, taking that point as a reference, generate a certain number of candidate boxes according to a candidate-box area and a candidate-box area scaling ratio;
C, divide the candidate boxes into positive and negative samples;
D, randomly sample several of the candidate boxes produced in step C to optimize the loss function.
Preferably, the mapping relation in step B is p_i = s_i * p_{i+1}, where p_i denotes a point in the input of layer i and s_i denotes the stride of layer i.
Preferably, the specific procedure of step C is: compute the overlap ratio between each candidate box and each ground-truth box in the training set, and divide positive and negative samples according to the overlap ratio. For each ground-truth box in the training set, the candidate box with the largest overlapping area is marked as a positive sample; for each remaining candidate box, if its overlap ratio with some ground-truth box exceeds an upper threshold, it is marked as a positive-sample candidate box; if its overlap ratio with every ground-truth box is below a lower threshold, it is marked as a negative-sample candidate box; all other remaining candidate boxes, and candidate boxes crossing the original image border, are discarded. The upper threshold is greater than the lower threshold.
Further, the overlap ratio is computed as the intersection of the candidate box and the ground-truth box divided by their union.
Preferably, the loss function includes a classification loss L_i = -log( e^{f_{y_i}} / sum_j e^{f_j} ) and a regression loss L_reg = sum_u smoothL1(t_i^u - t_i*^u), where smoothL1(x) = 0.5 x^2 if |x| < 1 and |x| - 0.5 otherwise.
Here L_i denotes the classification loss of the i-th sample; f_j denotes the score of sample i on class j; y_i denotes the true class of sample i; t_i is the coordinate vector of the box predicted for sample i, and t_i* is the Ground Truth coordinate vector of sample i.
A face detection method comprises the following steps:
obtain a picture to be detected;
input the picture to be detected into a trained face detection network, the face detection network comprising the above candidate-box generation network and a face classification network;
the candidate-box generation network produces, on the picture to be detected, proposals for regions that may be faces;
the face classification network classifies the proposals for regions that may be faces;
output the boxes classified as faces together with the regressed coordinates, and mark them.
Preferably, the training method of the face detection network is:
(1) mark face positions in the pictures of a face image dataset to make a training set, wherein the pictures are equal in size;
(2) input the training set into the face detection network, carry out model training, and adjust parameters until convergence to complete training; during training, the number of proposals produced per picture is the same: if the number of proposals produced during training exceeds a set value, the surplus proposals are discarded; if the number of proposals produced during training is less than the set value, the shortfall is filled by generating negative samples as proposals.
Compared with the prior art, the present invention has the following advantages and benefits:
1. Detecting faces with the candidate-box generation network and face classification network of the present method places candidate-box generation inside the convolutional neural network, so candidate boxes are obtained in a single forward pass; at the same time, the convolutional neural network learns picture features by itself, which are more discriminative than hand-crafted features, so the number of generated candidate boxes can be further reduced, effectively reducing the amount of computation.
2. The parallel method of the present invention effectively improves speed while maintaining detection accuracy.
Detailed description of the embodiments
To make the purpose, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to embodiments. The exemplary embodiments of the invention and their explanations are used only to explain the present invention and are not intended as limitations of the invention.
Embodiment 1
A kind of candidate frame generates the generation method of network, comprises the following steps,
A, the picture input network by arbitrary size, operate through convolution plus pondization, obtain a characteristic pattern Feature Map, this feature figure is subsequently used for making the selection of candidate frame;
B, the input that the characteristic pattern obtained in step A is used as to PGN_data layers, each point in characteristic pattern is mapped In the receptive field for returning artwork, and the point on the basis of the point, according to candidate frame area area and candidate frame area scaling ratio Produce a number of candidate frame;
C, the division that positive negative sample is carried out to candidate frame;
D, randomly select multiple candidate frames for producing in step C to optimize loss function Loss Function.
The main purpose of steps A and B is to produce the feature map and map the points of the feature map back into the original picture. Step D does not use all of the candidate boxes: if every candidate box were used to optimize the loss function, negative samples would dominate, and the final model's prediction accuracy on positive samples would be extremely low.
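As an illustration of step B, a minimal sketch of generating candidate boxes around one receptive-field center is given below. The base area, the scaling ratios, and the use of a single square aspect ratio are assumptions made for the example; the patent fixes only that boxes are produced from a candidate-box area and an area scaling ratio.

```python
import math

def generate_candidate_boxes(cx, cy, base_area, area_scales, aspect_ratios=(1.0,)):
    """Generate candidate boxes centered on (cx, cy) in original-image coordinates.

    base_area    -- reference candidate-box area in pixels^2 (hypothetical value)
    area_scales  -- multipliers applied to the base area (the area scaling ratio)
    aspect_ratios -- width/height ratios; the patent does not fix these, so a
                     single square box per scale is assumed here.
    Returns boxes as (x1, y1, x2, y2) tuples.
    """
    boxes = []
    for scale in area_scales:
        area = base_area * scale
        for ratio in aspect_ratios:
            # area = w * h and ratio = w / h  =>  w = sqrt(area * ratio)
            w = math.sqrt(area * ratio)
            h = area / w
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

boxes = generate_candidate_boxes(100, 100, base_area=32 * 32, area_scales=(1, 2, 4))
# Three square boxes of sides 32, 32*sqrt(2), and 64, all centered on (100, 100).
```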
The number of candidate boxes randomly selected in step D is arbitrary, but for ease of GPU computation it is preferably set to an even number, for example 64, 128, 256, or 300. Experiments found that only when the number of candidate boxes is set to 256 does the method reach both high detection accuracy and high detection speed; this is the best balance point between detection accuracy and detection speed. If the number of candidate boxes is too small, detection accuracy suffers; if too large, detection speed suffers.
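The random sampling of step D can be sketched as follows. n = 256 matches the balance point reported above; capping positives at half the batch and back-filling with negatives is an assumption borrowed from common practice in Fast R-CNN-style training, not a detail fixed by the patent.

```python
import random

def sample_for_loss(positives, negatives, n=256):
    """Randomly sample n candidate boxes for optimizing the loss (step D).

    positives/negatives are the candidate boxes labeled in step C.
    At most n // 2 positives are taken (an assumption, see lead-in);
    the remainder of the batch is filled with negatives.
    """
    k_pos = min(len(positives), n // 2)
    chosen = random.sample(positives, k_pos)
    chosen += random.sample(negatives, min(len(negatives), n - k_pos))
    return chosen
```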
The candidate-box generation network is the Proposal Generate Network, abbreviated PGN. The candidate-box generation network trained with this method is a fully convolutional neural network comprising, in order, convolutional layers, ReLU layers, local response normalization layers, pooling layers, and so on.
In the training process of PGN, the feature map (Feature Map) is the output of PGN_Cls_Score. To obtain its mapping relation to coordinate points in the original image, the padding of each convolutional layer and pooling layer is set to floor(k_i / 2), so that the coordinate mapping equation p_i = s_i * p_{i+1} + ((k_i - 1)/2 - padding_i) can be simplified:
When k_i is odd, padding_i = (k_i - 1)/2, so the coordinate mapping equation simplifies to
p_i = s_i * p_{i+1}.
When k_i is even, padding_i = k_i / 2, so the coordinate mapping equation simplifies to
p_i = s_i * p_{i+1} - 0.5.
Since p_i is a coordinate value and cannot take a fractional part, whether k_i is odd or even the formula can be simplified to
p_i = s_i * p_{i+1}.
In the formulas above, p_i denotes a point in the input of layer i, s_i denotes the stride of layer i, k_i denotes the kernel size of the convolution kernel of layer i, and padding_i denotes the padding size of layer i.
Thus the coordinate of the receptive-field center depends only on the stride of each layer, which effectively simplifies the computation.
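Composed over all layers, the simplified mapping p_i = s_i * p_{i+1} reduces to multiplying a feature-map coordinate by the product of the layer strides. A minimal sketch (the stride values are hypothetical):

```python
def feature_to_image_coord(p, strides):
    """Map a feature-map coordinate back to the original image.

    Applies p_i = s_i * p_{i+1} layer by layer, from the topmost feature map
    down to the input picture. `strides` lists the stride s_i of each
    convolution/pooling layer, ordered from the input layer upward.
    """
    for s in reversed(strides):
        p = s * p
    return p

# A point at index 3 on a feature map behind layers with strides 4, 2, 2
# maps to pixel 3 * 2 * 2 * 4 = 48 in the original image.
```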
In step C, the division into positive and negative samples is determined by the overlap ratio between each candidate box and each ground-truth box in the training set; the upper threshold is greater than the lower threshold. Compute the overlap ratio between each candidate box and each ground-truth box (Ground Truth Box) in the training set. For each ground-truth box in the training set, the candidate box with the largest overlapping area is marked as a positive sample, guaranteeing that every ground-truth box corresponds to at least one positive-sample candidate box. For each remaining candidate box, if its overlap ratio with some ground-truth box exceeds the upper threshold, it is marked as a positive-sample candidate box; this means each ground-truth box may correspond to multiple positive-sample candidate boxes, but each positive-sample candidate box corresponds to only one ground-truth box. If its overlap ratio with every ground-truth box is below the lower threshold, it is marked as a negative-sample candidate box. All other remaining candidate boxes, and candidate boxes crossing the original image border, are discarded.
The specific values of the upper and lower thresholds can be set according to the situation; different face detection algorithms may adopt different boundaries when dividing positive and negative samples. Preferably, in this method, based on empirical values, the upper and lower thresholds are set to 0.7 and 0.3 respectively. With these thresholds there is good separation between positive and negative samples, and the trained model performs well when classifying face versus non-face.
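A sketch of the step C division under the 0.7/0.3 thresholds, with boxes as (x1, y1, x2, y2) tuples. The `iou` helper and the processing order are illustrative; candidates crossing the image border are assumed to have been filtered out beforehand.

```python
def iou(a, b):
    """Overlap ratio (intersection over union) of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def label_candidates(candidates, gt_boxes, hi=0.7, lo=0.3):
    """Divide candidate boxes into positive and negative samples (step C).

    Returns one label per candidate: 'pos', 'neg', or 'discard'.
    """
    overlaps = [[iou(c, g) for g in gt_boxes] for c in candidates]
    labels = ['discard'] * len(candidates)
    # Each ground-truth box marks its best-overlapping candidate positive,
    # so every ground-truth box gets at least one positive sample.
    for j in range(len(gt_boxes)):
        best = max(range(len(candidates)), key=lambda i: overlaps[i][j])
        labels[best] = 'pos'
    for i, row in enumerate(overlaps):
        if labels[i] == 'pos':
            continue
        if max(row) > hi:        # above the upper threshold: positive
            labels[i] = 'pos'
        elif max(row) < lo:      # below the lower threshold for every gt box
            labels[i] = 'neg'
    return labels
```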
Here the overlap ratio, Intersection over Union (IoU), is a parameter in the object detection evaluation system. It is computed as the intersection of the candidate box and the ground-truth box divided by their union, i.e.:
IoU = area(Detection Result intersect Ground Truth Box) / area(Detection Result union Ground Truth Box).
The candidate box here is regarded as a preliminary detection result, so the Detection Result here is equivalent to the candidate box.
The loss function in step D uses the multi-task loss function of Fast R-CNN. The candidate-box generation network has two tasks: one is to output the positive/negative division result for a candidate box, the other is to regress the position of the candidate box. The corresponding loss function accordingly includes two functions: a classification loss for the positive/negative division result, and a regression loss.
The first task uses the conventional Softmax loss function; the loss of the i-th sample is:
L_i = -log( e^{f_{y_i}} / sum_j e^{f_j} ),
where f_j denotes the score of sample i on class j and y_i denotes the true class of sample i.
The second task uses the SmoothL1 loss function, defined as:
L_reg = sum_u smoothL1(t_i^u - t_i*^u), with smoothL1(x) = 0.5 x^2 if |x| < 1 and |x| - 0.5 otherwise,
where t_i is the coordinate vector of the predicted box and t_i* is the Ground Truth coordinate vector, i.e., the actual value.
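Both losses can be written out directly. The sketch below implements the per-sample Softmax loss and the SmoothL1 regression loss as defined above; the max-subtraction inside the Softmax is a standard numerical-stability step, not part of the patent's definition.

```python
import math

def softmax_loss(scores, true_class):
    """Classification loss L_i = -log( exp(f_y) / sum_j exp(f_j) )."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(f - m) for f in scores]
    return -math.log(exps[true_class] / sum(exps))

def smooth_l1(x):
    """SmoothL1: 0.5 * x^2 if |x| < 1, else |x| - 0.5."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5

def regression_loss(t_pred, t_gt):
    """Regression loss: sum of SmoothL1 over the box coordinates."""
    return sum(smooth_l1(p - g) for p, g in zip(t_pred, t_gt))

# With equal scores on the two classes (face / non-face), the Softmax loss
# equals ln 2, i.e. the model is maximally uncertain.
```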
Embodiment 2
Based on the candidate-box generation network of Embodiment 1, this embodiment discloses a face detection method.
A face detection method comprises the following steps:
obtain a picture to be detected;
input the picture to be detected into a trained face detection network, the face detection network comprising a face classification network and the candidate-box generation network of Embodiment 1;
the candidate-box generation network produces, on the picture to be detected, proposals for regions that may be faces;
the face classification network classifies the proposals for regions that may be faces;
output the boxes classified as faces together with the regressed coordinates, and mark them.
Unlike conventional face detection algorithms, the technical solution of the present invention neither produces a large number of candidate boxes by sliding windows, nor, like algorithms using region-based candidate-box generation, feeds every candidate box into a deep neural network for computation, which consumes a great deal of time. The method of this scheme places candidate-box generation inside the convolutional neural network, so candidate boxes are obtained in a single forward pass; meanwhile, the convolutional neural network learns picture features by itself, which are more discriminative than hand-crafted features such as SIFT, so the number of generated candidate boxes can be further reduced, reducing the amount of computation. Classification and regression use a multi-task structure, so extra information can be used to improve the learning performance of the current task, including generalization accuracy, learning speed, and the intelligibility of the model.
Consider real application scenarios: in, for example, surveillance video collected by a camera, there are many frames per second, and one wants to feed multiple frames into the face detection network at once to obtain results, so as to fully utilize computing resources and improve detection speed, rather than processing pictures one by one in the traditional way. To realize this parallelism and improve the speed of face detection, the above steps are optimized. During training, to realize batching, i.e., inputting a batch of a suitable number of pictures for network training, one problem must be solved: how to know which picture in the input each generated proposal corresponds to. This scheme adopts the following method:
The training method of the face detection network is:
(1) mark face positions in the pictures of a face image dataset to make a training set, wherein the pictures are equal in size;
(2) input the training set into the face detection network, carry out model training, and adjust parameters until convergence to complete training; during training, the number of proposals produced per picture is the same: if the number of proposals produced during training exceeds a set value, the surplus proposals are discarded; if the number is less than the set value, the shortfall is filled by generating negative samples as proposals. Pictures continually produce proposals during training; discarding surplus proposals here means that once the number of produced proposals reaches the set value, any excess is simply discarded.
The principle is: 1, the picture sizes in the training set are unified; 2, the number of proposals produced per picture during training is fixed to the same number, i.e., an appropriate value is set after experiments and all pictures produce the same number of proposals; 3, if the number of proposals produced during training exceeds the set value, the surplus proposals are discarded; if the number is less than the set value, the shortfall is filled by generating negative samples as proposals.
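The fixed-count rule of points 2 and 3 can be sketched as below; `make_negative` stands for any caller-supplied generator of negative-sample proposals (for example random background crops) and is a hypothetical name, not part of the patent.

```python
def fix_proposal_count(proposals, target, make_negative):
    """Force one picture's proposal list to exactly `target` entries.

    Surplus proposals are discarded once the set value is reached; a
    shortfall is filled by calling make_negative() to synthesize
    negative-sample proposals, as the training method above describes.
    """
    if len(proposals) > target:
        return proposals[:target]  # discard the excess
    shortfall = target - len(proposals)
    return proposals + [make_negative() for _ in range(shortfall)]
```

With every picture contributing the same number of proposals, a batch of B pictures always yields B * target proposals, so each proposal's source picture is known from its index.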
The training method provided by the present invention can conveniently train a multi-task network model with shared convolutions, and the batching solution allows one or more pictures to be detected to be input into the face detection model and processed simultaneously, thereby making full use of computing resources and improving parallelism. Because computation based on deep neural networks can all be accelerated by GPU, and this face detection network further optimizes the computational cost and introduces batching, the technical solution of the present invention can effectively improve speed while guaranteeing detection accuracy.
A shallow network has insufficient face detection ability under complex backgrounds, for example under poor illumination or with many or small faces, so a somewhat deeper hierarchical network must be introduced for the final candidate-box classification. The face classification network (Face Classify Net, FCN) in this embodiment comprises convolutional layers, pooling layers, fully connected layers, and so on, and can draw on the ideas of VGGNet. The VGGNet models target many-class problems, hence their 16-layer-deep network structure; but for a binary classification problem like face detection, too deep a network consumes too much detection time. Face Classify Net addresses this problem by optimizing VGGNet in terms of network depth, convolution kernel size and number, and the use of normalization layers. The face classification network FCN cuts 7 of the convolutional layers in VGGNet, retaining only 5. Reducing the number of network layers means reducing the time required for a forward pass, but it also implies a drop in detection accuracy. We therefore enlarge the convolution kernels in the remaining 5-layer convolutional network: specifically, the kernel sizes of the first and second layers are raised from 3*3 to 7*7 and 5*5 respectively, and the number of convolution kernels is doubled at each layer, so as to reduce the influence of the reduced depth on the network's accuracy. At the same time, Batch Normalization can be introduced to cope with changes in the data distribution and make the network converge better.
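The layer arithmetic behind this design can be checked with the standard convolution output-size formula. The kernel sizes 7, 5, 3, 3, 3 and the per-layer channel doubling follow the text; the starting channel width (32), the input size (224), stride-1 convolutions, and 2*2 pooling are assumptions made for the sketch.

```python
def conv_out(size, k, s=1, p=None):
    """Spatial output size of a conv/pool layer: floor((size + 2p - k) / s) + 1.

    By default padding p = k // 2, matching the padding choice described
    for PGN above (applying it to FCN is an assumption).
    """
    if p is None:
        p = k // 2
    return (size + 2 * p - k) // s + 1

# Sketch of the 5-layer FCN conv stack: first two kernels enlarged to 7x7
# and 5x5, the rest 3x3; channel counts double each layer.
kernels = [7, 5, 3, 3, 3]
channels = [32 * 2 ** i for i in range(5)]  # 32, 64, 128, 256, 512
size = 224
for k in kernels:
    size = conv_out(size, k)             # stride-1 conv keeps the map size
    size = conv_out(size, 2, s=2, p=0)   # 2x2 max-pool halves the map
# After 5 conv+pool stages, a 224x224 input is reduced to a 7x7 map.
```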
How are the above face classification network and candidate-box generation network put together to train a multi-task network model with shared convolutions? Here we propose two training methods: one is a multi-step loop-iteration training method, the other is an end-to-end training method.
The first way is in fact a process of continuous iteration. Since training Face Classify Network and Proposal Generate Network separately may let the networks converge in different directions, we can first train Proposal Generate Network independently, then initialize Face Classify Network with the weights of Proposal Generate Network, and use the candidate boxes previously output by Proposal Generate Network as the input of Face Classify Network to train Face Classify Network. Then Proposal Generate Network is re-initialized with the network parameters of Face Classify Network. This process is iterated continuously, i.e., Face Classify Network and Proposal Generate Network are trained in a loop. Here we illustrate a 4-step loop-iteration training method; its specific flow is as follows:
Step 1: train a Proposal Generate Network independently on the training dataset;
Step 2: using the candidate boxes produced by the Proposal Generate Network of step 1 as a new training set, train a Face Classify Network; so far, the layer parameters of the two networks are not shared at all;
Step 3: initialize a new Proposal Generate Network with the Face Classify Network parameters of step 2, but set the learning rate of the convolutional layers shared by Proposal Generate Network and Face Classify Network to 0, i.e., do not update them, and update only the layers unique to Proposal Generate Network during re-training; now the two networks share all common convolutional layers;
Step 4: still keeping the shared layers fixed, add in the layers unique to Proposal Generate Network as well to form one unified network, continue training, and fine-tune the layers unique to Face Classify Network; at this point the network has achieved the intended goal, namely predicting candidate boxes inside the network and realizing the detection function.
Although multi-step loop-iteration training can make Proposal Generate Network and Face Classify Network share the convolutional-layer parameters, training them separately and iteratively also incurs extra overhead in training time. For this, we illustrate a second training method: end-to-end training.
The two networks in the face detection model proposed by the present invention use the same hierarchical structure in their first several layers, so we can fuse the two to achieve end-to-end training.
After fusion, the network shares the layers of Proposal Generate Network and Face Classify Network from Conv1 to PGN_Cls_Score, and a Proposal Layer and a Proposal Target Layer are newly added to the network; the remaining layers belong separately to Proposal Generate Network and Face Classify Network.
The present invention crops the candidate boxes produced by Proposal Generate Network out of the original image via the Proposal Layer, as the direct input of Face Classify Network, thereby realizing the fusion of the two models very simply. It should be noted that the fusion proposed by the present invention is unusually simple, mainly for the following two reasons: first, the generation of candidate boxes has been integrated into the network through Proposal Generate Network; second, Proposal Generate Network and Face Classify Network are designed with similar network layers.
The above embodiments further describe in detail the purpose, technical solutions, and beneficial effects of the present invention. It should be understood that the foregoing is merely an embodiment of the present invention and is not intended to limit the protection scope of the present invention; any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (9)

1. A generation method for a candidate-box generation network, characterized by comprising the following steps:
A, input a picture into the network and, through convolution and pooling operations, obtain a feature map;
B, map each point of the feature map back to its receptive field in the original image and, taking that point as a reference, generate a certain number of candidate boxes according to a candidate-box area and a candidate-box area scaling ratio;
C, divide the candidate boxes into positive and negative samples;
D, randomly select several of the candidate boxes produced in step C to optimize the loss function.
2. The generation method for a candidate-box generation network according to claim 1, characterized in that the mapping relation in step B is p_i = s_i * p_{i+1}, where p_i denotes a point in the input of layer i and s_i denotes the stride of layer i.
3. The generation method for a candidate-box generation network according to claim 1, characterized in that step C specifically is: compute the overlap ratio between each candidate box and each ground-truth box in the training set, and divide positive and negative samples according to the overlap ratio: for each ground-truth box in the training set, the candidate box with the largest overlapping area is marked as a positive sample; for each remaining candidate box, if its overlap ratio with some ground-truth box exceeds an upper threshold, it is marked as a positive-sample candidate box; if its overlap ratio with every ground-truth box is below a lower threshold, it is marked as a negative-sample candidate box; all other remaining candidate boxes, and candidate boxes crossing the original image border, are discarded; the upper threshold is greater than the lower threshold.
4. The generation method for a candidate-box generation network according to claim 3, characterized in that the overlap ratio is computed as the intersection of the candidate box and the ground-truth box divided by their union.
5. The generation method for a candidate-box generation network according to claim 1, characterized in that the loss function includes a classification loss L_i = -log( e^{f_{y_i}} / sum_j e^{f_j} ) and a regression loss L_reg = sum_u smoothL1(t_i^u - t_i*^u), where smoothL1(x) = 0.5 x^2 if |x| < 1 and |x| - 0.5 otherwise; L_i denotes the classification loss of the i-th sample; f_j denotes the score of sample i on class j; y_i denotes the true class of sample i; t_i is the coordinate vector of the box predicted for sample i; t_i* is the Ground Truth coordinate vector of sample i.
6. The generation method for a candidate-box generation network according to claim 1, characterized in that the number of candidate boxes randomly selected in step D is an even number.
7. The generation method for a candidate-box generation network according to claim 6, characterized in that the number of candidate boxes randomly selected in step D is 256.
8. A face detection method, characterised in that it comprises the following steps:
obtaining a picture to be detected;
inputting the picture to be detected into a trained face detection network, the face detection network comprising the candidate frame generation network of any one of claims 1 to 7 and a face classification network;
the candidate frame generation network producing, on the picture to be detected, Proposals, i.e. regions that may contain faces;
the face classification network classifying the Proposals;
outputting the boxes classified as faces and their regressed coordinates, and marking them.
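The two-stage flow of claim 8 can be sketched as follows; `proposal_net` and `classifier_net` are hypothetical stand-ins for the two trained networks.

```python
def detect_faces(image, proposal_net, classifier_net):
    """Run the two-stage pipeline: generate Proposals, classify each one,
    and return the regressed boxes labelled as faces."""
    proposals = proposal_net(image)  # regions that may contain faces
    faces = []
    for box in proposals:
        label, refined = classifier_net(image, box)  # class + regressed coords
        if label == "face":
            faces.append(refined)
    return faces
```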
9. The face detection method according to claim 8, characterised in that the face detection network is trained as follows:
(1) annotate the face locations in the pictures of a face image dataset to make a training set, wherein the pictures are of equal size;
(2) input the training set into the face detection network, perform model training, and adjust the parameters until convergence; during training, the number of Proposals produced per picture is kept identical: if the number of Proposals produced exceeds a set value, the excess Proposals are discarded; if it is below the set value, the shortfall is made up by generating negative samples to serve as Proposals.
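Step (2)'s constant per-picture Proposals count can be sketched as below; `make_negative` is a hypothetical callable standing in for whatever routine generates a negative-sample box.

```python
import random

def fix_proposal_count(proposals, target, make_negative):
    """Keep the per-picture Proposals count constant, as in step (2):
    discard a random excess, or pad the shortfall with generated
    negative samples."""
    if len(proposals) > target:
        return random.sample(proposals, target)
    return proposals + [make_negative() for _ in range(target - len(proposals))]
```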
CN201710242833.2A 2017-04-14 2017-04-14 The generation method and method for detecting human face of a kind of candidate frame generation network Pending CN106991408A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710242833.2A CN106991408A (en) 2017-04-14 2017-04-14 The generation method and method for detecting human face of a kind of candidate frame generation network

Publications (1)

Publication Number Publication Date
CN106991408A true CN106991408A (en) 2017-07-28

Family

ID=59416226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710242833.2A Pending CN106991408A (en) 2017-04-14 2017-04-14 The generation method and method for detecting human face of a kind of candidate frame generation network

Country Status (1)

Country Link
CN (1) CN106991408A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130208977A1 (en) * 2011-11-02 2013-08-15 Nec Laboratories America, Inc. Receptive field learning for pooled image features
CN105701460A (en) * 2016-01-07 2016-06-22 王跃明 Video-based basketball goal detection method and device
CN106384345A (en) * 2016-08-31 2017-02-08 上海交通大学 RCNN-based image detection and flow calculation method
CN106485230A (en) * 2016-10-18 2017-03-08 中国科学院重庆绿色智能技术研究院 Neural-network-based face detection model training, face detection method and system

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107545263A (en) * 2017-08-02 2018-01-05 清华大学 Object detection method and device
CN109543498B (en) * 2017-10-16 2022-02-18 浙江工商大学 Lane line detection method based on multitask network
CN109543498A (en) * 2017-10-16 2019-03-29 浙江工商大学 Lane line detection method based on multitask network
CN108009544A (en) * 2017-12-13 2018-05-08 北京小米移动软件有限公司 Object detection method and device
CN108009544B (en) * 2017-12-13 2021-08-31 北京小米移动软件有限公司 Object detection method and device
CN110069959A (en) * 2018-01-22 2019-07-30 中国移动通信有限公司研究院 Face detection method, device and user equipment
CN108304820A (en) * 2018-02-12 2018-07-20 腾讯科技(深圳)有限公司 Face detection method, device and terminal device
CN108304820B (en) * 2018-02-12 2020-10-13 腾讯科技(深圳)有限公司 Face detection method, device and terminal device
CN108960340A (en) * 2018-07-23 2018-12-07 电子科技大学 Convolutional neural network compression method and face detection method
CN108960340B (en) * 2018-07-23 2021-08-31 电子科技大学 Convolutional neural network compression method and face detection method
CN109446922A (en) * 2018-10-10 2019-03-08 中山大学 Real-time robust face detection method
CN109446922B (en) * 2018-10-10 2021-01-08 中山大学 Real-time robust face detection method
CN109271970A (en) * 2018-10-30 2019-01-25 北京旷视科技有限公司 Face detection model training method and device
CN109492685A (en) * 2018-10-31 2019-03-19 中国矿业大学 Visual detection method for target objects with symmetric features
CN109492685B (en) * 2018-10-31 2022-05-24 煤炭科学研究总院 Visual detection method for target objects with symmetric features
CN109558902A (en) * 2018-11-20 2019-04-02 成都通甲优博科技有限责任公司 Fast object detection method
CN109614929A (en) * 2018-12-11 2019-04-12 济南浪潮高新科技投资发展有限公司 Face detection method and system based on multi-granularity cost-sensitive convolutional neural networks
CN109767427A (en) * 2018-12-25 2019-05-17 北京交通大学 Detection method for train rail fastener defects
CN109741318A (en) * 2018-12-30 2019-05-10 北京工业大学 Real-time detection method for single-stage multi-scale specific targets based on effective receptive fields
CN110009702A (en) * 2019-04-16 2019-07-12 聊城大学 Fall webworm larva net-curtain image positioning method for intelligent spraying robots
CN110009702B (en) * 2019-04-16 2023-08-04 聊城大学 Fall webworm larva net-curtain image positioning method for intelligent spraying robots
CN111627044A (en) * 2020-04-26 2020-09-04 上海交通大学 Target tracking attack and defense method based on deep networks
CN111627044B (en) * 2020-04-26 2022-05-03 上海交通大学 Target tracking attack and defense method based on deep networks
CN114048489A (en) * 2021-09-01 2022-02-15 广东智媒云图科技股份有限公司 Human body attribute data processing method and device based on privacy protection

Similar Documents

Publication Publication Date Title
CN106991408A (en) The generation method and method for detecting human face of a kind of candidate frame generation network
CN111539469B (en) Weakly supervised fine-grained image recognition method based on a visual self-attention mechanism
CN105069413B (en) Human posture recognition method based on deep convolutional neural networks
CN107169974A (en) Image segmentation method based on multi-supervision fully convolutional neural networks
CN109711262B (en) Pedestrian detection method for intelligent excavators based on deep convolutional neural networks
CN106981080A (en) Night-time unmanned vehicle scene depth estimation method based on infrared images and radar data
JP6788264B2 (en) Facial expression recognition method, facial expression recognition device, computer program and advertisement management system
CN108961245A (en) Image quality classification method based on a dual-channel deep parallel convolution network
CN109902546A (en) Face recognition method, device and computer-readable medium
CN107016406A (en) Pest and disease image generation method based on generative adversarial networks
CN104700076B (en) Face image virtual sample generation method
CN107506722A (en) Face emotion recognition method based on a deep sparse convolutional neural network
CN107150347A (en) Robot perception and understanding method based on human-machine collaboration
CN107818302A (en) Non-rigid multi-scale object detection method based on convolutional neural networks
CN107408211A (en) Method for object re-identification
CN107316058A (en) Method for improving object detection performance by improving object classification and localization accuracy
CN109934115A (en) Face recognition model construction method, face recognition method and electronic device
CN110619319A (en) Face detection method and system based on an improved MTCNN model
CN105772407A (en) Waste classification robot based on image recognition technology
CN104299245B (en) Augmented reality tracking method based on neural networks
CN107038422A (en) Fatigue state recognition method based on space-geometry-constrained deep learning
CN109558902A (en) Fast object detection method
JP2018165948A (en) Image recognition device, image recognition method, computer program, and product monitoring system
CN108121995A (en) Method and apparatus for recognizing objects
CN107358262A (en) Classification method and classifier for high-resolution images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170728