CN107871134A - Face detection method and device - Google Patents
Face detection method and device
- Publication number: CN107871134A
- Application number: CN201610848895.3A
- Authority
- CN
- China
- Prior art keywords
- image
- training
- face
- confidence level
- network model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present invention provide a face detection method and device. The face detection method includes: classifying an image under test with a pre-trained first convolutional neural network model to screen at least one candidate region out of the image's input regions; classifying each candidate region with a pre-trained second convolutional neural network model to screen out at least one selected region; and performing detection-box elimination and aggregation on the selected regions to obtain the face detection result. Because the input regions of the first convolutional neural network can be very small, the computation speed of face detection is improved. Moreover, because two convolutional neural network models of different depths are used and the candidate regions undergo a second classification, the classification prediction is more accurate, a large number of false-positive samples are filtered out, and detection performance is improved.
Description
Technical field
Embodiments of the present invention relate to the field of artificial intelligence, and in particular to a face detection method and device.
Background technology
Face datection refers to the process of position and the size that all faces are determined from input area.At face information
A key technology in reason, Face datection are premise and the basis of many automatic facial images analysis applications, as recognition of face,
Face registration, face tracking, face character identification etc., and the first step of modern man-machine interactive system.Moreover, it is big at present
Most digital cameras, which all embedded in human face detection tech, to be carried out auto-focusing, many social networks such as FaceBook etc. and utilizes face
Detection technique realizes image labeling.
With the development of artificial intelligence, face detection methods have advanced, but shortcomings remain. Methods based on boosted cascades, for example, compute quickly with cascade detectors built on single integral-image features, but they can only handle frontal face images, and their detection performance under complex conditions such as pose variation, partial occlusion and changing illumination is poor. Methods based on deformable part models detect well, even when part of the face is occluded, but because a latent support vector machine classifier must be trained to find the geometric relations between parts, their computational cost is high and detection is time-consuming.
How to improve the detection performance of face detection while guaranteeing computation speed is therefore a technical problem urgently awaiting a solution.
Summary of the invention
Embodiments of the present invention provide a face detection method, to solve the prior-art problem that the detection performance and the computation speed of face detection cannot satisfy user requirements at the same time.
To solve the above problem, the invention discloses a face detection method, including:
classifying an image under test with a pre-trained first convolutional neural network model, determining a first face confidence for each input region of the image under test, and screening at least one candidate region out of the input regions according to the first face confidence, the first convolutional neural network comprising m convolutional layers;
classifying each candidate region with a pre-trained second convolutional neural network model, determining a second face confidence for the candidate region, and screening out at least one selected region according to the second face confidence, the second convolutional neural network comprising k convolutional layers, where k and m are positive integers and k is greater than m;
performing detection-box elimination and aggregation on the selected regions to obtain the face detection result.
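The claimed two-stage flow can be sketched as follows. `shallow_cnn` and `deep_cnn` are hypothetical stand-ins for the pre-trained first (m-layer) and second (k-layer, k > m) models, and the thresholds `t1`/`t2` for the two confidence cutoffs are illustrative:

```python
def cascade_detect(windows, shallow_cnn, deep_cnn, t1=0.5, t2=0.5):
    """Two-stage cascade sketch: the shallow network screens candidate
    regions by the first face confidence, the deeper network re-scores
    them by the second face confidence."""
    candidates = [w for w in windows if shallow_cnn(w) >= t1]   # first confidence
    scored = [(w, deep_cnn(w)) for w in candidates]             # second confidence
    return [(w, s) for w, s in scored if s >= t2]               # selected regions
```

The deeper model is only ever run on the (typically few) candidates that survive the cheap first stage, which is what makes the cascade fast.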
Before classifying the image under test with the pre-trained first convolutional neural network model, the method further includes: when input regions are obtained from the image under test with an image pyramid, converting the fully connected layers of the network structures in the pre-trained first convolutional neural network model and second convolutional neural network model into fully convolutional layers.
Before performing detection-box elimination and aggregation on the selected regions to obtain the face detection result, the method further includes: performing bounding-box regression on each selected region with the second convolutional neural network model.
Preferably, performing detection-box elimination and aggregation on the selected regions to obtain the face detection result includes: sorting the selected regions by confidence and taking the detection box with the highest confidence; centered on that box, eliminating surrounding detection boxes whose overlap exceeds a first overlap degree; centered on the same box, aggregating surrounding detection boxes whose overlap exceeds a second overlap degree into a single face detection box, and taking the highest confidence as the confidence of the aggregated result, to obtain the face detection result.
Preferably, the method further includes a step of training the image classification task of a convolutional neural network, the convolutional neural network being the first convolutional neural network and/or the second convolutional neural network: taking a face dataset with face annotations as training samples, and cropping the training images in the samples; choosing positive and negative samples according to the overlap between the cropped images and the ground-truth face annotations; filtering the positive and negative sample images through the convolutional layers to obtain feature maps of the training images, and pooling the images filtered by the convolutional layers through the pooling layers to reduce the feature vectors output by the convolutional layers; fully connecting the feature maps into one vector, and mapping the vector's features out through an activation function to obtain the image classification result, the image classification result covering a face image class and a non-face image class; and iterating on the image classification result with the loss function of the image classification task.
Preferably, the method further includes a step of training the bounding-box regression task of the second convolutional neural network: comparing the coordinates of the cropped positive samples with the ground-truth coordinates to obtain the positive-sample output labels of the bounding-box regression task; and iterating on the positive-sample output labels and preset negative-sample output labels with a weighted sum of the bounding-box regression loss function and the classification loss function.
The invention also provides a face detection device, including:
a pre-classification module, configured to classify an image under test with a pre-trained first convolutional neural network model, determine a first face confidence for each input region of the image under test, and screen at least one candidate region out of the input regions according to the first face confidence, the first convolutional neural network comprising m convolutional layers;
a second classification module, configured to classify each candidate region with a pre-trained second convolutional neural network model, determine a second face confidence for the candidate region, and screen out at least one selected region according to the second face confidence, the second convolutional neural network comprising k convolutional layers, where k and m are positive integers and k is greater than m;
a non-maximum suppression module, configured to perform detection-box elimination and aggregation on the selected regions to obtain the face detection result.
Preferably, the device further includes a re-definition module, configured to, when input regions are obtained from the image under test with an image pyramid, convert the fully connected layers of the network structures in the pre-trained first and second convolutional neural network models into fully convolutional layers.
Preferably, the device further includes a bounding-box regression module, configured to perform bounding-box regression on each selected region with the second convolutional neural network model.
The non-maximum suppression module is specifically configured to sort the selected regions by confidence and take the detection box with the highest confidence; centered on that box, eliminate surrounding detection boxes whose overlap exceeds a first overlap degree; and, centered on the same box, aggregate surrounding detection boxes whose overlap exceeds a second overlap degree into a single face detection box, to obtain the face detection result.
Preferably, the device further includes a first training module for the image classification task of a convolutional neural network.
The first training module is specifically configured to take a face dataset with face annotations as training samples and crop the training images in the samples; choose positive and negative samples according to the overlap between the cropped images and the ground-truth face annotations; filter the positive and negative sample images through the convolutional layers to obtain feature maps of the training images, and pool the images filtered by the convolutional layers through the pooling layers to reduce the feature vectors output by the convolutional layers; fully connect the feature maps into one vector and map the vector's features out through an activation function to obtain the image classification result, the image classification result covering a face image class and a non-face image class; and iterate on the image classification result with the loss function of the image classification task.
A second training module is specifically configured to compare the coordinates of the cropped positive samples with the ground-truth coordinates to obtain the positive-sample output labels of the bounding-box regression task, and to iterate on the positive-sample output labels and preset negative-sample output labels with a weighted sum of the bounding-box regression loss function and the classification loss function.
In summary, embodiments of the present invention classify each region of an image under test with a pre-trained first convolutional neural network model according to its face confidence to obtain at least one candidate region; classify the candidate regions a second time with a pre-trained second convolutional neural network model according to their face confidence to obtain at least one selected region; and perform detection-box elimination and aggregation on the selected regions to obtain the face detection result. Because the input regions of the first convolutional neural network can be very small, the computation speed of face detection is improved. Moreover, because two convolutional neural network models of different depths are used and the candidate regions undergo a second classification, the classification prediction is more accurate, a large number of false-positive samples are filtered out, and detection performance is improved.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a face detection method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of training the first convolutional neural network model in a face detection method provided by an embodiment of the present invention;
Fig. 3 is a flowchart of training the image classification task of the second convolutional neural network model in a face detection method provided by an embodiment of the present invention;
Fig. 4 is a flowchart of training the bounding-box regression task of the second convolutional neural network model in a face detection method provided by an embodiment of the present invention;
Fig. 5 is a flowchart of the face detection stage in a face detection method provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of the network structure of a first convolutional neural network provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of the network structure of a second convolutional neural network provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of the image under detection in the detection results of the non-maximum suppression operations of the face detection stage provided by an embodiment of the present invention;
Fig. 9 is a schematic diagram of the image after the NMS-max operation in the detection results of the non-maximum suppression operations of the face detection stage provided by an embodiment of the present invention;
Fig. 10 is a schematic diagram of the detection result after the NMS-Average operation in the detection results of the non-maximum suppression operations of the face detection stage provided by an embodiment of the present invention;
Fig. 11 is a schematic diagram of the relation between recall rate and number of false detections in the face detection performance provided by an embodiment of the present invention;
Fig. 12 is a schematic diagram of a face detection test result provided by an embodiment of the present invention;
Fig. 13 is a structural block diagram of a face detection device provided by an embodiment of the present invention;
Fig. 14 is a structural block diagram of another face detection device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
Embodiment one
Referring to Fig. 1, a flowchart of a face detection method provided by an embodiment of the present invention is given.
Step 101: classify the image under test with the pre-trained first convolutional neural network model, determine the first face confidence of each input region of the image under test, and screen at least one candidate region out of the input regions according to the first face confidence.
The first convolutional neural network comprises m convolutional layers.
Specifically, the first convolutional neural network is a deep convolutional neural network with deep learning capability, including one or more convolutional layers and pooling layers. Compared with other deep learning structures, deep convolutional neural networks show outstanding performance in image recognition.
Before faces are detected, the image classification task of the first convolutional neural network can be trained in advance on a dataset rich in face annotation information, yielding a first convolutional neural network model with image classification capability.
Running the trained first convolutional neural network model over the image under test yields the face confidence of each region of the image, where the face confidence is the probability that the image in that region is a face. Comparing the face confidence with a preset first face-confidence threshold classifies each region image, distinguishing face regions from non-face regions, and the face regions are chosen as candidate regions.
Step 102: classify each candidate region with the pre-trained second convolutional neural network model, determine the second face confidence of the candidate region, and screen out at least one selected region according to the second face confidence.
The second convolutional neural network comprises k convolutional layers, where k and m are positive integers and k is greater than m. The second convolutional neural network is also a deep convolutional neural network with deep learning capability; having more convolutional layers than the first convolutional neural network, i.e. a greater depth, it obtains a more accurate classification prediction.
Before the candidate regions are classified with the second convolutional neural network model, the second convolutional neural network can be trained in advance on a dataset rich in face annotation information, yielding a second convolutional neural network model with image classification and bounding-box regression capability.
Classifying the candidate regions with the trained second convolutional neural network model yields the second face confidence of each candidate region. Regions whose second face confidence exceeds a preset confidence threshold are taken as selected regions; each region image is thus classified further, the candidate regions are filtered, and falsely detected regions are screened out.
Step 103: perform detection-box elimination and aggregation on the selected regions to obtain the face detection result.
Specifically, bounding-box regression is applied to the selected regions through the second convolutional neural network model obtained by pre-training. The selected regions after bounding-box regression are sorted by confidence, and the detection box with the highest confidence is taken. Centered on that box, surrounding detection boxes whose overlap exceeds a first overlap degree are eliminated and the confidence ordering is updated; then, centered on the same box, surrounding detection boxes whose overlap exceeds a second overlap degree are aggregated into a single face detection box, to obtain the face detection result.
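The elimination (NMS-max) and aggregation (NMS-average) steps above can be sketched as follows. Boxes are (x1, y1, x2, y2, confidence); `t_elim` and `t_aggr` stand in for the first and second overlap degrees, whose values the description leaves open, and the exact interplay of the two thresholds is an interpretation of the text:

```python
def nms_detect(boxes, t_elim=0.5, t_aggr=0.3):
    """Confidence-sorted elimination and aggregation of detection boxes."""
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    results = []
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)  # confidence ordering
    while boxes:
        best, rest = boxes[0], boxes[1:]
        overlaps = [(b, iou(best, b)) for b in rest]
        # NMS-average: boxes overlapping the best box beyond the second
        # degree merge into one face box; the highest confidence is kept
        group = [best] + [b for b, o in overlaps if o > t_aggr]
        merged = tuple(sum(b[i] for b in group) / len(group) for i in range(4))
        results.append(merged + (best[4],))
        # NMS-max: boxes overlapping beyond the first degree are eliminated;
        # only boxes clear of both thresholds survive to the next round
        boxes = [b for b, o in overlaps if o <= t_aggr and o <= t_elim]
    return results
```

Averaging the coordinates of the merged group smooths the jitter of near-duplicate detections, while keeping the maximum confidence preserves the score of the strongest hit.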
With the bounding-box regression parameters obtained while training the bounding-box regression task of the second convolutional neural network model in the pre-training stage, the face detection box of a selected region can be adjusted to obtain a high-quality face detection box.
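Applying the learned regression parameters to adjust a detection box can be sketched as follows; the (dx, dy, dw, dh) parameterization (center shift scaled by box size, log-scale resize) is a common convention and an assumption here, not fixed by the patent:

```python
import math

def apply_bbox_regression(box, offsets):
    """Adjust a selected region's detection box with regression offsets.
    Box is (x, y, w, h) with (x, y) the top-left corner."""
    x, y, w, h = box
    dx, dy, dw, dh = offsets
    cx, cy = x + w / 2 + dx * w, y + h / 2 + dy * h  # shift the center
    nw, nh = w * math.exp(dw), h * math.exp(dh)      # rescale width/height
    return cx - nw / 2, cy - nh / 2, nw, nh
```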
In summary, embodiments of the present invention classify each region of an image under test with a pre-trained first convolutional neural network model according to its face confidence to obtain at least one candidate region; classify the candidate regions a second time with a pre-trained second convolutional neural network model according to their face confidence to obtain at least one selected region; and perform detection-box elimination and aggregation on the selected regions to obtain the face detection result. Because the input regions of the first convolutional neural network can be very small, the computation speed of face detection is improved. Moreover, because two convolutional neural network models of different depths are used and the candidate regions undergo a second classification, the classification prediction is more accurate, a large number of false-positive samples are filtered out, and detection performance is improved.
Embodiment two
Referring to Fig. 2, this embodiment discusses the face detection method further on the basis of the embodiment above.
In an alternative embodiment, before face detection is performed on an image, the method further includes training the first convolutional neural network model and the second convolutional neural network model.
The training flows of the first convolutional neural network model and the second convolutional neural network model are discussed separately below with the embodiments of Fig. 2 to Fig. 4.
Referring to Fig. 2, a flowchart of training the first convolutional neural network model in a face detection method provided by an embodiment of the present invention is given:
Step 201: take a face dataset with face annotations as training samples, and crop the training images in the training samples.
Optionally, the WIDER FACE dataset is used as the training samples; WIDER FACE contains abundant annotated face data, including images with occlusion, pose variation and varied activity scenes.
Optionally, the samples in the face dataset are cropped in a sliding-window manner. For example, the larger of the height and width of the smallest face region in an image is taken as the side of a base window, and windows of 1, 0.7 and 1.4 times the base window side are used to crop the image randomly.
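The multi-scale cropping just described can be sketched as follows; the rounding of window sides and the seeded random placement are illustrative assumptions, since the patent only gives the scale factors:

```python
import random

def crop_windows(min_face_h, min_face_w, scales=(1.0, 0.7, 1.4)):
    """Square crop sides: the larger of the smallest annotated face's
    height and width is the base window side, scaled by each factor."""
    base = max(min_face_h, min_face_w)
    return [round(base * s) for s in scales]

def random_crops(img_w, img_h, sides, per_side=2, rng=None):
    """Random top-left positions for square crops of the given sides,
    kept inside the image bounds."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    crops = []
    for s in sides:
        for _ in range(per_side):
            if s <= min(img_w, img_h):  # skip windows larger than the image
                crops.append((rng.randrange(img_w - s + 1),
                              rng.randrange(img_h - s + 1), s, s))
    return crops
```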
Step 202: take, among the cropped images, those whose overlap with the ground-truth face annotation exceeds a first threshold as positive samples.
An outer boundary region is determined at a preset distance from the center of the ground-truth annotation window of the sample image; for example, at 1.5 times that distance. Within this region, crops are taken by the sliding-window method, and images whose overlap with the ground-truth face annotation exceeds the first threshold are taken as positive samples. The overlap can be measured with the IOU (Intersection Over Union), defined as the ratio of the intersecting area of two detection boxes to their combined area. For example, with the first threshold set to 0.6, regions with IOU > 0.6 are taken as positive samples.
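The IOU measure and the resulting sample labeling can be expressed directly; the 0.6 and 0.3 cutoffs below are the example thresholds given in the embodiment:

```python
def iou(a, b):
    """Intersection over Union of two boxes (x, y, w, h): the ratio of
    the intersecting area to the combined (union) area."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def label_crop(crop, truth, pos=0.6, neg=0.3):
    """Positive sample if IOU > 0.6, negative if IOU < 0.3; crops in
    between are not used for training."""
    o = iou(crop, truth)
    return "positive" if o > pos else "negative" if o < neg else None
```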
Step 203: take, among the cropped images, those whose overlap with the ground-truth face annotation is below a second threshold as negative samples.
Windows of different sizes are used to crop the whole image randomly, and images whose overlap with the ground-truth face annotation is below the second threshold are taken as negative samples. For example, with the second threshold set to 0.3, regions with IOU < 0.3 are taken as negative samples.
Step 204: filter the positive and negative sample images m times through the m convolutional layers to obtain the feature maps of the training images.
The positive and negative samples are input into the first convolutional neural network in a preset ratio for model training. The first convolutional neural network is a convolutional neural network for image classification comprising m convolutional layers and n pooling layers; for example, a convolutional neural network with five convolutional layers can be chosen to filter the images.
Optionally, before the positive and negative sample color images of a preset size are input into the first convolutional neural network for training, the images need to be preprocessed: a preset fixed mean is subtracted from each of the red, green and blue pixel values of the sample images, for example the intermediate gray value 127.5, and normalization is applied.
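The preprocessing step can be sketched as follows. The patent specifies subtracting the intermediate gray value 127.5; dividing by the same value so that pixels land roughly in [-1, 1] is an assumed choice for the unspecified normalization:

```python
def preprocess(rgb_image, mean=127.5, scale=127.5):
    """Per-channel mean subtraction and normalization for an RGB crop
    (nested lists of [r, g, b] pixels) before it enters the first CNN."""
    return [[[(c - mean) / scale for c in px] for px in row]
            for row in rgb_image]
```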
The preprocessed images are input into the first convolutional neural network for training. Each convolutional layer filters the image with a series of small convolution kernels. For example, dense filtering is performed with 3 x 3 convolution kernels, the stride of each 3 x 3 kernel is fixed at 1, and the image boundary padding is set to 1. Padding the input image when it enters the convolutional network lets the convolution responses of boundary pixels be computed as well, so complete information is obtained and a better effect is achieved.
Step 205: pool the images filtered by the convolutional layers n times through the n pooling layers, reducing the feature vectors output by the convolutional layers.
Through the pooling operations of the pooling layers, the images filtered by the convolutional layers are pooled n times, which reduces the feature vectors output by the convolutional layers, increases the channel count of the feature maps, and gives the features translation invariance, yielding more robust features. For example, with n = 2, the result of the convolutions is pooled by a convolutional neural network containing 2 pooling layers; if the channel count of the first feature map is 96, it grows to 256 after the 2 pooling layers.
Optionally, the pooling uses pyramid pooling. Pyramid pooling turns the convolutional features of an image of any scale into the same dimension, which lets the convolutional neural network handle images of any scale. Because the input size of the first convolutional neural network in the embodiment of the present invention is kept small to reduce computation, pyramid pooling makes it possible to detect images of small size, which is of great significance.
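Pyramid pooling's fixed-length output can be illustrated for a single-channel map; the (1x1, 2x2) level set is an illustrative assumption, since the patent does not fix the pyramid:

```python
def pyramid_pool(feature_map, levels=(1, 2)):
    """Max pooling over a pyramid of grids so that a feature map of any
    size yields a fixed-length vector (here 1 + 4 = 5 values)."""
    h, w = len(feature_map), len(feature_map[0])
    vector = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                rows = range(i * h // n, max((i + 1) * h // n, i * h // n + 1))
                cols = range(j * w // n, max((j + 1) * w // n, j * w // n + 1))
                vector.append(max(feature_map[r][c] for r in rows for c in cols))
    return vector
```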
Step 206: fully connect the feature maps into one vector.
Through the fully connected layer of the first convolutional neural network, the feature vectors produced by the m convolutional layers and n pooling layers are fully connected into one vector.
Optionally, Dropout regularization is applied after the fully connected layer to prevent overfitting.
Step 207, the vectorial Feature Mapping is come out by activation primitive, obtains image classification result, the figure
As classification results include facial image classification and non-face image category.
In the network structure of the first convolutional neural network, a Batch Normalization operation is applied after each convolution operation. Batch Normalization means that, each time the parameters are adjusted by stochastic gradient descent, the activation values of the network are standardized over each mini-batch. During the training stage, changes in the parameters cause the distribution of the network activations to shift; the Batch Normalization operation alleviates this shift, enabling the network to converge to a good result faster. Batch Normalization also allows the network to be trained with a larger learning rate and with more casual initialization. The hidden layers after the convolutional layers then use the Rectified Linear Unit (ReLU) as the activation function, which solves the saturation and gradient vanishing problems caused by the relatively deep network; the features of the vector are mapped out to obtain the image classification result, i.e., the images are classified into face images and non-face images.
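A minimal sketch of the per-unit computation described above — mini-batch standardization followed by ReLU — assuming a single activation unit and the default scale/shift gamma = 1, beta = 0:

```python
import math

def batch_norm_relu(batch, eps=1e-5, gamma=1.0, beta=0.0):
    """Mini-batch normalization followed by ReLU for one activation unit.

    `batch` holds that unit's pre-activation values over a mini-batch:
    standardize to zero mean / unit variance, scale-shift, then clamp at 0.
    """
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    normed = [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]
    return [max(0.0, x) for x in normed]  # ReLU

out = batch_norm_relu([1.0, 2.0, 3.0, 4.0])
```

In a real network gamma and beta are learned per channel; they are fixed here only to keep the sketch small.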
Step 208: the image classification results are iterated over by the loss function of the image classification task to obtain the first convolutional neural network model.
In the training stage of the first convolutional neural network model, each batch uses a set number of images to update the network parameters; for example, 128 images per batch, an initial learning rate of 0.01, and a momentum of 0.9. Iteration proceeds through the loss function of the image classification task to obtain the first convolutional neural network model. For example, the softmax loss function may be used as the loss function of the image classification task, and training terminates after 5,000,000 iterations, yielding the first convolutional neural network model, where softmax is the function that performs the classification.
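The softmax loss referred to above can be illustrated for one two-class (face / non-face) sample; the logit values below are illustrative only:

```python
import math

def softmax_loss(logits, label):
    """Softmax cross-entropy loss for one two-class sample.

    Converts raw scores to probabilities, then returns the negative
    log-probability of the true class together with the probabilities.
    """
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -math.log(probs[label]), probs

loss, probs = softmax_loss([2.0, 0.0], label=0)  # score 2.0 for "face"
```

Minimizing this loss over mini-batches by stochastic gradient descent is the iteration the step describes.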
Referring to FIG. 3, a flowchart of training the second convolutional neural network model on the image classification task in a face detection method provided in an embodiment of the present invention is given:
Step 301: a face data set containing annotated faces is taken as training samples, and the training images in the training samples are cropped.
Optionally, the WIDER FACE data set is used as the training samples; WIDER FACE contains richly annotated face data, including images under occlusion, pose variation, various activity scenes, and other diverse conditions.
Optionally, the samples in the face data set are cropped in a sliding-window manner. For example, the larger of the height and width of the smallest face region in an image is taken as the side length of a base window, and windows of 1×, 0.7× and 1.4× the base window side length are used to randomly crop the image.
Step 302: among the cropped images, those whose overlap with the true face annotation is greater than a third threshold are taken as positive samples, where the third threshold is greater than the first threshold.
An outer boundary region is determined at a preset distance between the true annotation of the sample image and the window center; for example, the outer boundary region may be determined at 1.5 times the distance between the true annotation and the window center. Cropping is performed within this region by the sliding-window method, and the images whose overlap with the true face annotation is greater than the third threshold are taken as positive samples, where the third threshold is greater than the first threshold. For example, when the third threshold is set to 0.8, regions with IOU > 0.8 are taken as positive samples.
Step 303: among the cropped images, those whose overlap with the true face annotation is less than a fourth threshold are taken as negative samples.
Windows of different sizes are used to crop randomly over the entire image, and the images whose overlap with the true face annotation is less than the fourth threshold are taken as negative samples. For example, when the fourth threshold is set to 0.3, regions with IOU < 0.3 are taken as negative samples.
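The IOU-based selection of positive and negative samples in Steps 302-303 can be sketched as follows, using the example thresholds 0.8 and 0.3; the box coordinates are illustrative:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x0, y0, x1, y1]."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def label_crop(crop, truth, pos_thr=0.8, neg_thr=0.3):
    """Label a cropped window: positive (IOU > 0.8), negative (IOU < 0.3),
    or None (discarded) per the third/fourth thresholds above."""
    v = iou(crop, truth)
    if v > pos_thr:
        return "positive"
    if v < neg_thr:
        return "negative"
    return None

truth = [10, 10, 50, 50]
```

Crops with intermediate overlap are left unlabeled here, since the text assigns them to neither class.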
Step 304: k layers of filtering are performed on the positive and negative sample images by the k convolutional layers to obtain the feature maps of the training images.
The positive and negative samples are input, in a preset ratio, into the second convolutional neural network for model training. The second convolutional neural network is a convolutional neural network for image classification containing k convolutional layers and i pooling layers, where the number of convolutional layers k of the second convolutional neural network is greater than the number of convolutional layers m of the first convolutional neural network. For example, a convolutional neural network containing 7 convolutional layers may be chosen to filter the images. Using 3 × 3 convolution kernels, this structure indirectly increases the depth of the whole network and improves detection performance.
Optionally, before the positive and negative sample color images of a preset size are input into the second convolutional neural network for training, a preprocessing operation must be performed on the images. The preprocessing operation includes subtracting a preset fixed mean from each of the red, green and blue pixel values of the positive and negative sample images; for example, the intermediate gray value 127.5 may be subtracted from each of the red, green and blue pixel values, followed by normalization.
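A sketch of the preprocessing described above. Subtracting 127.5 follows the text; the text says only "normalization", so dividing by 127.5 (scaling into roughly [-1, 1]) is an assumption, not a value from the text:

```python
def preprocess(pixels, mean=127.5):
    """Channel-wise preprocessing: subtract the intermediate gray value
    127.5 from each R, G, B value, then scale (assumed: divide by the
    same 127.5, giving values in roughly [-1, 1])."""
    return [[(c - mean) / mean for c in px] for px in pixels]

out = preprocess([(0, 127.5, 255)])
```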
The preprocessed images are input into the second convolutional neural network for training. Each convolutional layer uses a series of small convolution kernels to filter the image. For example, dense filtering is performed with 3 × 3 convolution kernels, with the stride of each 3 × 3 kernel fixed to 1 and the image boundary padding set to 1.
Step 305: i pooling operations are performed on the images filtered by the convolutional layers through the i pooling layers, reducing the dimensionality of the feature vectors output by the convolutional layers.
Through the pooling operations of the pooling layers, the images filtered by the convolutional layers are pooled i times, which reduces the dimensionality of the feature vectors output by the convolutional layers, increases the number of channels of the feature maps, and gives the features translation invariance, so that more robust features are obtained. Because the number of convolutional layers k of the second convolutional neural network is greater than the number of convolutional layers m of the first convolutional neural network, in order to keep the feature vectors output by the convolutional layers within a suitable range, the number of pooling layers i of the second convolutional neural network may be made greater than the number of pooling layers n of the first convolutional neural network. For example, a convolutional neural network containing 3 pooling layers may be chosen to pool the convolution results; if the number of channels of the feature map after the second convolutional layer is 96, then after the 3 pooling layers the number of channels of the feature map increases to 512.
Step 306: the feature maps are fully connected into a single vector.
Through the fully connected layer of the second convolutional neural network, the feature vectors produced by the k convolutional layers and i pooling layers are fully connected into a single vector.
Optionally, the Dropout regularization method is applied after the fully connected layer to prevent over-fitting.
Step 307: the features of the vector are mapped out by the activation function of the image classification task to obtain the image classification task result.
In the network structure of the second convolutional neural network, a Batch Normalization operation is applied after each convolution operation. During the training stage, changes in the parameters cause the distribution of the network activations to shift; the Batch Normalization operation alleviates this shift, enabling the network to converge to a good result faster, and also allows the network to be trained with a larger learning rate and with more casual initialization. The hidden layers after the convolutional layers then use the Rectified Linear Unit as the activation function, which solves the saturation and gradient vanishing problems caused by the relatively deep network; the features of the vector are mapped out to obtain the image classification result, i.e., the images are classified into face images and non-face images.
Step 308: the image classification results are iterated over by the loss function of the image classification task.
In the training stage of the second convolutional neural network model, each batch uses a set number of images to update the network parameters; for example, 128 images per batch, an initial learning rate of 0.01, and a momentum of 0.9. Iteration proceeds through the loss function of the image classification task to obtain the second convolutional neural network model. For example, the softmax loss function may be used as the loss function of the image classification task, and training terminates after 5,000,000 iterations, completing the training of the second convolutional neural network model on the image classification task, where softmax is the function that performs the classification.
Referring to FIG. 4, a flowchart of training the second convolutional neural network model on the bounding box regression task in a face detection method provided in an embodiment of the present invention is given:
Step 401: the positive sample output labels for training the bounding box regression task are obtained by comparing the coordinates of the cropped positive samples with the true annotation coordinates.
During the training of the second convolutional neural network, the difference between the coordinates of the cropped positive samples and the true annotation coordinates is used as the label for training the regression task. For example, let the bounding box coordinates of a cropped positive sample be [x'0, y'0, x'1, y'1], where (x'0, y'0) and (x'1, y'1) are the coordinates of the upper-left and lower-right corners of the positive sample, and let the true annotation of the face whose center is nearest be [x0, y0, x1, y1], where (x0, y0) and (x1, y1) are the coordinates of the upper-left and lower-right corners of the true face region. The output label of the positive sample is set to [tx0, ty0, tx1, ty1], where:
tx0 = x0 - x'0, ty0 = y0 - y'0, tx1 = x1 - x'1, ty1 = y1 - y'1
Step 402: iteration proceeds through the loss function of the bounding box regression task using the positive sample output labels and the preset negative sample output labels.
The positive sample output labels and the preset negative sample output labels are input, in a preset ratio, into the second convolutional neural network for network training. For example, the negative sample output label may be set to [0, -1, -1, -1, -1], and network training performed with a positive-to-negative sample ratio of 1:3. Iteration proceeds through the loss function of the bounding box regression task. For example, the Euclidean loss function is used as the loss function of the bounding box regression task, training terminates after 5,000,000 iterations, and the training of the second convolutional neural network model on the bounding box regression task is complete.
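Under a plain coordinate-difference reading of the regression labels (the text says the label is the difference between the cropped coordinates and the true annotation coordinates; any normalization by box size is omitted here as an assumption), the labels and the Euclidean loss can be sketched as:

```python
def regression_label(crop, truth):
    """Bounding-box regression label: per-coordinate difference between
    the true annotation and the cropped positive sample (an assumed
    reading of 'difference of the coordinates' in the text)."""
    return [t - c for t, c in zip(truth, crop)]

def euclidean_loss(pred, label):
    """Euclidean (L2) loss used for the bounding box regression task."""
    return 0.5 * sum((p - l) ** 2 for p, l in zip(pred, label))

lab = regression_label(crop=[12, 8, 52, 48], truth=[10, 10, 50, 50])
```

A perfect prediction reproduces the label and drives the loss to zero.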
On the basis of the above embodiments, the flow of face detection using the trained first convolutional neural network model and second convolutional neural network model is discussed separately below with the embodiment of FIG. 5.
Referring to FIG. 5, a flowchart of the face detection stage in a face detection method provided in an embodiment of the present invention is given:
Step 501: the fully connected layers in the network structures of the first convolutional neural network model and the second convolutional neural network model obtained by pre-training are converted into fully convolutional layers, so that input regions can be obtained using the image pyramid method.
Because the original sliding-window approach of obtaining input regions from the image under test is computationally expensive, a fully convolutional network is used instead to obtain input regions of arbitrary size, in order to reduce the amount of computation.
Step 502: classification processing is performed on the image under test according to the first convolutional neural network model obtained by pre-training, yielding the first face confidence of each input region in the image under test, and candidate regions are obtained by screening the input regions.
An image pyramid is constructed with a preset zoom scale and zoom factor to detect faces of different sizes in the image. For example, an image pyramid may be constructed with a zoom scale of 2 and a zoom factor of 0.7937 to detect faces of different sizes in each image. Each image in the image pyramid serves as the input of the first convolutional neural network and passes through the m convolutional layers, n pooling layers, and fully convolutional layer to obtain a response map. That is, the images obtained by scaling with the image pyramid undergo the convolution, pooling and full convolution operations, and a forward pass of the network yields the response map of the original image. Each point in the response map corresponds to a detection window in the original image, i.e., to the face location at the corresponding position in the original image, and the probability at that point represents the confidence of the corresponding face. Whether the corresponding position in the original image is a face, i.e., the confidence of that detection box in the original image, is judged by comparing the confidence values of the response map against a confidence threshold. A relatively low confidence threshold is set, for example 0.5, to ensure that face regions can still be detected under severe occlusion, blur, pose variation and other complex conditions. The points above the threshold are mapped back to the original image according to the zoom scale of the input region, and the detection results obtained at that scale are the candidate regions. Referring to FIG. 6, a schematic diagram of the network structure of a first convolutional neural network provided in an embodiment of the present invention is given.
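The pyramid construction with zoom factor 0.7937 (approximately 2^(-1/3), so three shrink steps halve the image) can be sketched as follows; the minimum detectable side of 12 pixels is an assumption for illustration, not a value from the text:

```python
def pyramid_scales(min_side, smallest=12, factor=0.7937):
    """Scales of the image pyramid: repeatedly shrink by the zoom factor
    (0.7937 ~ 2 ** (-1/3), so three steps halve the image) until the
    shorter image side would drop below the smallest detectable size.
    `smallest=12` is an illustrative assumption.
    """
    scales, s = [], 1.0
    while min_side * s >= smallest:
        scales.append(round(s, 4))
        s *= factor
    return scales

scales = pyramid_scales(min_side=100)
```

Each scale produces one response map; detections from all scales are mapped back to original-image coordinates before the NMS step.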
Step 503: classification processing is performed on the candidate regions respectively using the pre-trained second convolutional neural network model, the second face confidence of each candidate region is determined, and at least one selected region is screened out according to the second face confidence.
The candidate regions obtained by the first convolutional neural network model serve as the input of the second convolutional neural network model, and a secondary classification is performed on these candidate regions. Specifically, with a candidate region as input, a response map is obtained through the k convolutional layers, i pooling layers, and fully convolutional layer. The confidence of each point in the response map is computed, and the candidate regions whose confidence value is above a preset confidence threshold are taken as the selected regions. Referring to FIG. 7, a schematic diagram of the network structure of a second convolutional neural network provided in an embodiment of the present invention is given.
Step 504: bounding box regression is performed on the selected regions by the second convolutional neural network model obtained by pre-training, yielding the face detection boxes.
Bounding box regression is performed on the selected regions using the bounding box regression task of the second convolutional neural network model obtained by pre-training, and the bounding boxes of the selected regions after the regression processing are the face detection boxes. For example, if the coordinates of an obtained candidate region are [x0'', y0'', x1'', y1''] and the regression target values obtained by training are [tx0, ty0, tx1, ty1], then the final detection result after the corresponding bounding box adjustment is [x0'' + tx0, y0'' + ty0, x1'' + tx1, y1'' + ty1], whose first and second coordinate pairs are the upper-left and lower-right corners of the regressed detection box. In this stage, the regression parameters obtained in the training stage are used to adjust the face detection boxes, yielding high-quality face detection boxes.
Step 505: a detection box non-maximum suppression operation is performed on the selected regions after the bounding box regression, and the face detection boxes of different scales are displayed on the same image.
An efficient post-processing method, NMS (Non-maximum suppression), is used to obtain the final detection results, and the detection results at different scales are displayed on the same image to ensure a high recall rate. NMS is a post-processing method for face detection boxes whose purpose is to ensure that each face object corresponds to only one detection box; the optimal detection region is obtained after the redundant overlapping detection boxes are eliminated. It includes the NMS_max method and the NMS_average method. First, the NMS_max method finds the detection window with the maximum confidence and removes all detection boxes whose IOU with it exceeds a certain overlap threshold. Then the NMS_average method aggregates the detection boxes: the detection window with the highest confidence is found, and all detection boxes whose IOU with that window exceeds a certain threshold are aggregated into one detection box, while ensuring that the outer boundary of each detection box does not exceed the size of the highest-confidence detection window by more than 10%; the highest confidence is taken as the confidence of the final detection result. When the number of faces in the image cannot be determined, after the aggregation of the first face detection box is completed, the confidence ranking results are updated, and the elimination and aggregation are repeated centered on the second-highest-confidence detection box in the updated ranking, until all detection boxes above the preset confidence threshold in the confidence ranking have completed the above elimination and aggregation operations.
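The NMS_max step described above — keep the highest-confidence window, remove boxes overlapping it beyond a threshold, repeat on what remains — can be sketched as follows (the 0.5 overlap threshold is illustrative, and the NMS_average aggregation is omitted):

```python
def nms_max(boxes, scores, thr=0.5):
    """Greedy NMS (the NMS_max step): repeatedly keep the highest-confidence
    box and eliminate every remaining box whose IOU with it exceeds `thr`.
    Returns the indices of the kept boxes."""
    def iou(a, b):
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)

    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best, order = order[0], order[1:]
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thr]
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]]
keep = nms_max(boxes, scores=[0.9, 0.8, 0.7])
```

Here the second box overlaps the first heavily and is eliminated, while the distant third box survives, so each face keeps a single detection box.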
Referring to FIG. 8 to FIG. 10, schematic diagrams of detection results after the non-maximum suppression operation in the face detection stage provided in an embodiment of the present invention are given.
The embodiment of the present invention uses the FDDB data set as the sample for testing the face detection method. FDDB is the current standard database for evaluating face detection; it contains 2,846 images with a total of 5,171 annotated faces. This data set includes images under complex conditions such as different poses, illumination, low resolution and defocus. With 1,000 false positives, the present invention reaches a recall rate of 87.46% on the FDDB data set. Referring to FIG. 11, a schematic diagram of the relationship between recall rate and the number of false positives in face detection performance provided in an embodiment of the present invention is given. Referring to FIG. 12, a schematic diagram of a face detection test result provided in an embodiment of the present invention is given.
In summary, in the embodiment of the present invention, the first convolutional neural network model obtained by pre-training performs classification processing on each region in the image under test according to the face confidence of each region, obtaining candidate regions; then the second convolutional neural network model obtained by pre-training performs classification processing on the candidate regions according to their face confidence, obtaining selected regions, where the selected regions are the regions remaining after the false-positive regions are filtered out of the candidate regions; meanwhile, the second convolutional neural network model obtained by pre-training performs bounding box regression on the selected regions to obtain the face detection results. Because a secondary classification method is used, the input region of the first convolutional neural network can be very small, and images of smaller size can be detected by using the image pyramid method, thereby ensuring an improvement in the computation speed of face detection. In addition, by converting the fully connected layers in the network structure into fully convolutional layers in the face detection stage to extract the candidate regions of the image, secondary classification can be performed on face regions of different sizes and positions, avoiding problems such as detection performance being affected by restrictions on the face region. Moreover, because a second convolutional neural network model that is deeper than the first convolutional neural network model is used to perform the dual tasks of secondary classification and bounding box regression on the obtained candidate regions, the classification predictions are more accurate, the detection box locations are close to the true annotations, a large number of false-positive samples are filtered out, and the detection performance is improved.
It should be noted that, for simplicity of description, the method embodiments are all expressed as combinations of a series of actions, but those skilled in the art should know that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Embodiment Three
On the basis of the above embodiments, this embodiment further provides a face detection device, applied to an artificial intelligence terminal.
Referring to FIG. 13, a structural block diagram of a face detection device provided in an embodiment of the present invention is given, which may specifically include the following modules:
A pre-classification module 1301, configured to perform classification processing on the image under test using the pre-trained first convolutional neural network model, determine the first face confidence of each input region in the image under test, and screen out at least one candidate region from the input regions according to the first face confidence, the first convolutional neural network including m convolutional layers.
A secondary classification module 1302, configured to perform classification processing on the candidate regions respectively using the pre-trained second convolutional neural network model, determine the second face confidence of the candidate regions, and screen out at least one selected region according to the second face confidence, the second convolutional neural network including k convolutional layers, where k and m are positive integers and k is greater than m.
A non-maximum suppression module 1303, configured to perform detection box elimination and aggregation according to the screened-out at least one selected region, to obtain the face detection result.
Referring to FIG. 14, in an optional implementation of the present invention, on the basis of FIG. 13, the face detection device further includes the following modules: a redefinition module 1304 and a bounding box regression module 1305.
The network structures of the pre-trained first convolutional neural network model and second convolutional neural network model include fully connected layers. The redefinition module 1304 is configured to, when input regions are obtained from the image under test using the image pyramid method, convert the fully connected layers in the network structures of the pre-trained first convolutional neural network model and second convolutional neural network model into fully convolutional layers before classification processing is performed on the image under test using the pre-trained first convolutional neural network model.
The bounding box regression module 1305 is configured to perform bounding box regression processing on each selected region respectively using the second convolutional neural network model.
On this basis, the non-maximum suppression module 1303 is specifically configured to: sort the selected regions by confidence to obtain the detection box with the highest confidence; centered on the detection box with the highest confidence, eliminate the surrounding detection boxes exceeding a first overlap degree; and, centered on the detection box with the highest confidence, aggregate the surrounding detection boxes exceeding a second overlap degree into one face detection box, to obtain the face detection result.
Further, the face detection device also includes the following modules: a first training module 1306 for training the image classification task of the convolutional neural networks, and a second training module 1307 for training the bounding box regression task of the second convolutional neural network.
The convolutional neural networks include the first convolutional neural network and/or the second convolutional neural network.
Specifically, the first training module 1306 is configured to: take a face data set containing face annotations as training samples and crop the training images in the training samples; choose positive samples and negative samples according to the degree of overlap between the cropped images and the true face annotations; perform filtering processing on the positive and negative sample images by the convolutional layers to obtain the feature maps of the training images, and pool the images filtered by the convolutional layers through the pooling layers to reduce the dimensionality of the feature vectors output by the convolutional layers; fully connect the feature maps into a single vector, and map out the features of the vector by the activation function to obtain the image classification result, the image classification result including a face image category and a non-face image category; and iterate over the image classification result by the loss function of the image classification task.
The second training module 1307 is configured to: obtain the positive sample output labels for training the bounding box regression task by comparing the coordinates of the cropped positive samples with the true annotation coordinates; and iterate by the loss function of the bounding box regression task using the positive sample output labels and the preset negative sample output labels.
In summary, in the embodiment of the present invention, the pre-classification module 1301 performs classification processing on each region in the image under test according to the face confidence of each region, obtaining at least one candidate region; then the secondary classification module 1302 performs secondary classification processing on the candidate regions according to their face confidence, obtaining at least one selected region; and detection box elimination and aggregation are performed by the non-maximum suppression module 1303 according to the screened-out at least one selected region, to obtain the face detection result. Because the input region of the first convolutional neural network can be very small, the computation speed of face detection is improved. Furthermore, because two convolutional neural network models of different depths are used to perform secondary classification on the obtained candidate regions, the classification predictions are more accurate, a large number of false-positive samples are filtered out, and the detection performance is improved.
As for the device embodiments, since they are basically similar to the method embodiments, the description is relatively brief; for the relevant parts, refer to the description of the method embodiments.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts between the embodiments, reference may be made to one another.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a device, or a computer program product. Therefore, the embodiments of the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The embodiments of the present invention are described with reference to the flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing terminal device to work in a specific way, so that the instructions stored in the computer-readable memory produce a manufacture including an instruction device, the instruction device realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing terminal device, so that a series of operation steps are performed on the computer or other programmable terminal device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable terminal device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although having been described for the preferred embodiment of the embodiment of the present invention, those skilled in the art once know base
This creative concept, then other change and modification can be made to these embodiments.So appended claims are intended to be construed to
Including preferred embodiment and fall into having altered and changing for range of embodiment of the invention.
Finally, it should be noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, such that a process, method, article, or terminal device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. Absent further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that comprises the element.
The face detection method and apparatus provided by the present invention have been described in detail above. Specific examples have been used herein to illustrate the principles and embodiments of the present invention; the description of the above embodiments is intended only to help in understanding the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be variations in the specific embodiments and the scope of application in accordance with the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (12)
1. A face detection method, wherein the method comprises:
performing classification on an image to be detected using a pre-trained first convolutional neural network model, determining a first face confidence of each input region in the image to be detected, and selecting at least one candidate region from the input regions according to the first face confidence, the first convolutional neural network comprising m convolutional layers;
performing classification on each of the candidate regions using a pre-trained second convolutional neural network model, determining a second face confidence of the candidate region, and selecting at least one selected region according to the second face confidence, the second convolutional neural network comprising k convolutional layers, wherein k and m are positive integers and k is greater than m;
performing detection-box elimination and aggregation according to the at least one selected region, to obtain a face detection result.
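The two-stage cascade of claim 1 can be sketched as follows. The two scoring functions below are toy stand-ins for the m-layer and k-layer networks, and the thresholds are illustrative assumptions, not values fixed by the claim:

```python
import numpy as np

def cascade_detect(regions, shallow_net, deep_net, t1=0.5, t2=0.8):
    """Two-stage cascade: a cheap m-layer model filters input regions by
    a first face confidence; a deeper k-layer model (k > m) rescores the
    survivors with a second face confidence."""
    candidates = [r for r in regions if shallow_net(r) >= t1]
    scored = [(r, deep_net(r)) for r in candidates]
    return [(r, s) for r, s in scored if s >= t2]

# Toy stand-ins for the two CNNs: score derived from mean intensity.
shallow = lambda r: float(np.mean(r))
deep = lambda r: float(np.mean(r)) ** 0.5

regions = [np.full((12, 12), v) for v in (0.1, 0.6, 0.9)]
dets = cascade_detect(regions, shallow, deep)
print(len(dets))  # only the brightest region survives both stages
```

The point of the ordering is cost: most regions are rejected by the shallow model, so the expensive deeper model only sees the few candidates that remain.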
2. The method according to claim 1, wherein the network structures of the pre-trained first convolutional neural network model and second convolutional neural network model comprise fully connected layers, and before performing classification on the image to be detected using the pre-trained first convolutional neural network model, the method further comprises:
when input regions are obtained from the image to be detected using an image pyramid method, converting the fully connected layers in the network structures of the pre-trained first convolutional neural network model and second convolutional neural network model into fully convolutional layers.
3. The method according to claim 1, wherein before performing detection-box elimination and aggregation according to the at least one selected region to obtain the face detection result, the method further comprises:
performing bounding-box regression on each selected region using the second convolutional neural network model.
4. The method according to claim 3, wherein performing detection-box elimination and aggregation according to the at least one selected region to obtain the face detection result comprises:
sorting the selected regions by confidence to obtain the detection box with the highest confidence;
taking the detection box with the highest confidence as the center, eliminating surrounding detection boxes whose overlap exceeds a first overlap degree;
taking the detection box with the highest confidence as the center, aggregating surrounding detection boxes whose overlap exceeds a second overlap degree into one face detection box, and taking the highest confidence as the confidence of the aggregation result, to obtain the face detection result.
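One round of the elimination-and-aggregation step of claim 4 can be sketched like this. The threshold values and the coordinate-averaging rule for merged boxes are illustrative assumptions; the claim fixes only the two overlap degrees and that the highest confidence is carried over:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); intersection-over-union overlap.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def eliminate_and_aggregate(boxes, scores, t_elim=0.7, t_merge=0.3):
    """Take the highest-confidence box as center, drop boxes above the
    first overlap degree, average those above the second overlap degree
    into one face box, and keep the highest confidence."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    best = order[0]
    group = [boxes[best]]
    for i in order[1:]:
        ov = iou(boxes[best], boxes[i])
        if ov > t_elim:
            continue                    # eliminated as a duplicate
        if ov > t_merge:
            group.append(boxes[i])      # aggregated into the face box
    merged = tuple(sum(c) / len(group) for c in zip(*group))
    return merged, scores[best]

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (4, 4, 14, 14)]
face, conf = eliminate_and_aggregate(boxes, [0.9, 0.8, 0.6])
print(face, conf)
```

Here the second box overlaps the top box enough to be aggregated, the third overlaps too little and is left for a later round, and the result keeps the top confidence 0.9.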
5. The method according to claim 1, wherein the method further comprises a step of training an image classification task of a convolutional neural network, the convolutional neural network comprising the first convolutional neural network and/or the second convolutional neural network:
selecting a face data set containing face annotations as training samples, and cropping the training images in the training samples;
selecting positive samples and negative samples according to the degree of overlap between the cropped images and the annotated real faces;
filtering the positive-sample and negative-sample images through the convolutional layers to obtain feature maps of the training images, and pooling the convolutionally filtered images through the pooling layers to reduce the feature vectors output by the convolutional layers;
fully connecting the feature maps into one vector, and mapping the features of the vector through an activation function to obtain image classification results, the image classification results comprising a face image category and a non-face image category;
iterating on the image classification results through a loss function of the image classification task.
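The overlap-based positive/negative selection in the training step above can be sketched as follows. The 0.65/0.3 thresholds are illustrative assumptions; the claim does not fix values, and crops with intermediate overlap are discarded here as ambiguous:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); overlap with an annotated face.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def select_samples(crops, gt_faces, pos_t=0.65, neg_t=0.3):
    """Label each cropped training window by its best overlap with any
    real (annotated) face; crops between the thresholds are dropped."""
    pos, neg = [], []
    for crop in crops:
        best = max((iou(crop, g) for g in gt_faces), default=0.0)
        if best >= pos_t:
            pos.append(crop)
        elif best < neg_t:
            neg.append(crop)
    return pos, neg

gt = [(0, 0, 10, 10)]
crops = [(0, 0, 10, 10), (5, 5, 15, 15), (20, 20, 30, 30)]
pos, neg = select_samples(crops, gt)
print(len(pos), len(neg))  # 1 2
```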
6. The method according to claim 5, wherein the method further comprises a step of training a bounding-box regression task of the second convolutional neural network:
comparing the coordinates of the cropped positive samples with the annotated real coordinates, to obtain positive-sample output labels for training the bounding-box regression task;
iterating on the positive-sample output labels and preset negative-sample output labels through a weighted operation of a loss function of the bounding-box regression task and a loss function of the classification task.
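The regression labels and the weighted joint iteration of claim 6 can be sketched as follows. The normalized-offset parameterization and the loss weights are illustrative assumptions; the claim only states that the cropped and annotated coordinates are compared and that the two loss functions are combined by weighting:

```python
import numpy as np

def bbox_regression_label(crop, gt):
    """Positive-sample output label: offsets from the cropped box to the
    annotated ground-truth box, normalized by the crop size."""
    cw, ch = crop[2] - crop[0], crop[3] - crop[1]
    return np.array([(gt[0] - crop[0]) / cw, (gt[1] - crop[1]) / ch,
                     (gt[2] - crop[2]) / cw, (gt[3] - crop[3]) / ch])

def joint_loss(cls_loss, reg_loss, w_cls=1.0, w_reg=0.5):
    # Weighted combination of the classification-task loss and the
    # bounding-box regression-task loss, iterated together in training.
    return w_cls * cls_loss + w_reg * reg_loss

label = bbox_regression_label((0, 0, 10, 10), (1, 1, 11, 11))
print(label, joint_loss(1.0, 2.0))
```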
7. A face detection apparatus, wherein the apparatus comprises:
a pre-classification module, configured to perform classification on an image to be detected using a pre-trained first convolutional neural network model, determine a first face confidence of each input region in the image to be detected, and select at least one candidate region from the input regions according to the first face confidence, the first convolutional neural network comprising m convolutional layers;
a secondary classification module, configured to perform classification on each of the candidate regions using a pre-trained second convolutional neural network model, determine a second face confidence of the candidate region, and select at least one selected region according to the second face confidence, the second convolutional neural network comprising k convolutional layers, wherein k and m are positive integers and k is greater than m;
a non-maximum suppression module, configured to perform detection-box elimination and aggregation according to the at least one selected region, to obtain a face detection result.
8. The apparatus according to claim 7, wherein the network structures of the pre-trained first convolutional neural network model and second convolutional neural network model comprise fully connected layers, and the apparatus further comprises:
a redefinition module, configured to, when input regions are obtained from the image to be detected using an image pyramid method, convert the fully connected layers in the network structures of the pre-trained first convolutional neural network model and second convolutional neural network model into fully convolutional layers.
9. The apparatus according to claim 7, wherein the face detection apparatus further comprises:
a bounding-box regression module, configured to perform bounding-box regression on each selected region using the second convolutional neural network model.
10. The apparatus according to claim 9, wherein
the non-maximum suppression module is specifically configured to: sort the selected regions by confidence to obtain the detection box with the highest confidence; taking the detection box with the highest confidence as the center, eliminate surrounding detection boxes whose overlap exceeds a first overlap degree; and, taking the detection box with the highest confidence as the center, aggregate surrounding detection boxes whose overlap exceeds a second overlap degree into one face detection box, to obtain the face detection result.
11. The apparatus according to claim 7, wherein the apparatus further comprises a first training module for training an image classification task of a convolutional neural network, the convolutional neural network comprising the first convolutional neural network and/or the second convolutional neural network;
the first training module is specifically configured to: select a face data set containing face annotations as training samples, and crop the training images in the training samples;
select positive samples and negative samples according to the degree of overlap between the cropped images and the annotated real faces;
filter the positive-sample and negative-sample images through the convolutional layers to obtain feature maps of the training images, and pool the convolutionally filtered images through the pooling layers to reduce the feature vectors output by the convolutional layers;
fully connect the feature maps into one vector, and map the features of the vector through an activation function to obtain image classification results, the image classification results comprising a face image category and a non-face image category;
iterate on the image classification results through a loss function of the image classification task.
12. The apparatus according to claim 11, wherein the apparatus further comprises a second training module for training a bounding-box regression task of the second convolutional neural network;
the second training module is specifically configured to: compare the coordinates of the cropped positive samples with the annotated real coordinates, to obtain positive-sample output labels for training the bounding-box regression task; and
iterate on the positive-sample output labels and preset negative-sample output labels through a weighted operation of a loss function of the bounding-box regression task and a loss function of the classification task.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610848895.3A CN107871134A (en) | 2016-09-23 | 2016-09-23 | A kind of method for detecting human face and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107871134A true CN107871134A (en) | 2018-04-03 |
Family
ID=61751747
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610848895.3A Pending CN107871134A (en) | 2016-09-23 | 2016-09-23 | A kind of method for detecting human face and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107871134A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160148079A1 (en) * | 2014-11-21 | 2016-05-26 | Adobe Systems Incorporated | Object detection using cascaded convolutional neural networks |
CN105868689A (en) * | 2016-02-16 | 2016-08-17 | 杭州景联文科技有限公司 | Cascaded convolutional neural network based human face occlusion detection method |
CN105912990A (en) * | 2016-04-05 | 2016-08-31 | 深圳先进技术研究院 | Face detection method and face detection device |
Non-Patent Citations (3)
Title |
---|
Haoxiang Li et al.: "A Convolutional Neural Network Cascade for Face Detection", CVPR 2015 |
Junam Song et al.: "Fast and Robust Face Detection based on CNN in Wild Environment", Journal of Korea Multimedia Society |
Kaipeng Zhang et al.: "Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks", IEEE Signal Processing Letters |
Cited By (87)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108764164A (en) * | 2018-05-30 | 2018-11-06 | 华中科技大学 | A kind of method for detecting human face and system based on deformable convolutional network |
CN108985147A (en) * | 2018-05-31 | 2018-12-11 | 成都通甲优博科技有限责任公司 | Object detection method and device |
CN108960064A (en) * | 2018-06-01 | 2018-12-07 | 重庆锐纳达自动化技术有限公司 | A kind of Face datection and recognition methods based on convolutional neural networks |
CN108960069A (en) * | 2018-06-05 | 2018-12-07 | 天津大学 | A method of the enhancing context for single phase object detector |
CN108846440B (en) * | 2018-06-20 | 2023-06-02 | 腾讯科技(深圳)有限公司 | Image processing method and device, computer readable medium and electronic equipment |
CN108846440A (en) * | 2018-06-20 | 2018-11-20 | 腾讯科技(深圳)有限公司 | Image processing method and device, computer-readable medium and electronic equipment |
CN108875833A (en) * | 2018-06-22 | 2018-11-23 | 北京智能管家科技有限公司 | Training method, face identification method and the device of neural network |
CN110659550A (en) * | 2018-06-29 | 2020-01-07 | 比亚迪股份有限公司 | Traffic sign recognition method, traffic sign recognition device, computer equipment and storage medium |
CN109558779A (en) * | 2018-07-06 | 2019-04-02 | 北京字节跳动网络技术有限公司 | Image detecting method and device |
CN110796154A (en) * | 2018-08-03 | 2020-02-14 | 华为技术有限公司 | Method, device and equipment for training object detection model |
CN110796154B (en) * | 2018-08-03 | 2023-03-24 | 华为云计算技术有限公司 | Method, device and equipment for training object detection model |
US11605211B2 (en) | 2018-08-03 | 2023-03-14 | Huawei Cloud Computing Technologies Co., Ltd. | Object detection model training method and apparatus, and device |
CN109190512A (en) * | 2018-08-13 | 2019-01-11 | 成都盯盯科技有限公司 | Method for detecting human face, device, equipment and storage medium |
CN109360183A (en) * | 2018-08-20 | 2019-02-19 | 中国电子进出口有限公司 | A kind of quality of human face image appraisal procedure and system based on convolutional neural networks |
CN109145854A (en) * | 2018-08-31 | 2019-01-04 | 东南大学 | A kind of method for detecting human face based on concatenated convolutional neural network structure |
CN109255382A (en) * | 2018-09-07 | 2019-01-22 | 阿里巴巴集团控股有限公司 | For the nerve network system of picture match positioning, method and device |
CN109255382B (en) * | 2018-09-07 | 2020-07-17 | 阿里巴巴集团控股有限公司 | Neural network system, method and device for picture matching positioning |
CN110889421A (en) * | 2018-09-07 | 2020-03-17 | 杭州海康威视数字技术股份有限公司 | Target detection method and device |
CN109214389B (en) * | 2018-09-21 | 2021-09-28 | 上海小萌科技有限公司 | Target identification method, computer device and readable storage medium |
CN109447943A (en) * | 2018-09-21 | 2019-03-08 | 中国科学院深圳先进技术研究院 | A kind of object detection method, system and terminal device |
CN109447943B (en) * | 2018-09-21 | 2020-08-14 | 中国科学院深圳先进技术研究院 | Target detection method, system and terminal equipment |
CN109214389A (en) * | 2018-09-21 | 2019-01-15 | 上海小萌科技有限公司 | A kind of target identification method, computer installation and readable storage medium storing program for executing |
CN109255767A (en) * | 2018-09-26 | 2019-01-22 | 北京字节跳动网络技术有限公司 | Image processing method and device |
CN110569700A (en) * | 2018-09-26 | 2019-12-13 | 阿里巴巴集团控股有限公司 | method and device for optimizing damage identification result |
CN109613002A (en) * | 2018-11-21 | 2019-04-12 | 腾讯科技(深圳)有限公司 | A kind of glass defect detection method, apparatus and storage medium |
CN109543648A (en) * | 2018-11-30 | 2019-03-29 | 公安部交通管理科学研究所 | It is a kind of to cross face extraction method in vehicle picture |
CN109543648B (en) * | 2018-11-30 | 2022-06-17 | 公安部交通管理科学研究所 | Method for extracting face in car passing picture |
CN109784148A (en) * | 2018-12-06 | 2019-05-21 | 北京飞搜科技有限公司 | Biopsy method and device |
CN109670429A (en) * | 2018-12-10 | 2019-04-23 | 广东技术师范学院 | A kind of the monitor video multiple target method for detecting human face and system of Case-based Reasoning segmentation |
CN109635770A (en) * | 2018-12-20 | 2019-04-16 | 上海瑾盛通信科技有限公司 | Biopsy method, device, storage medium and electronic equipment |
CN109815814A (en) * | 2018-12-21 | 2019-05-28 | 天津大学 | A kind of method for detecting human face based on convolutional neural networks |
CN109815814B (en) * | 2018-12-21 | 2023-01-24 | 天津大学 | Face detection method based on convolutional neural network |
CN109799905A (en) * | 2018-12-28 | 2019-05-24 | 深圳云天励飞技术有限公司 | A kind of hand tracking and advertisement machine |
CN111382638A (en) * | 2018-12-29 | 2020-07-07 | 广州市百果园信息技术有限公司 | Image detection method, device, equipment and storage medium |
CN111382638B (en) * | 2018-12-29 | 2023-08-29 | 广州市百果园信息技术有限公司 | Image detection method, device, equipment and storage medium |
CN111382643B (en) * | 2018-12-30 | 2023-04-14 | 广州市百果园信息技术有限公司 | Gesture detection method, device, equipment and storage medium |
CN111382643A (en) * | 2018-12-30 | 2020-07-07 | 广州市百果园信息技术有限公司 | Gesture detection method, device, equipment and storage medium |
CN109829434A (en) * | 2019-01-31 | 2019-05-31 | 杭州创匠信息科技有限公司 | Method for anti-counterfeit and device based on living body texture |
CN109934115A (en) * | 2019-02-18 | 2019-06-25 | 苏州市科远软件技术开发有限公司 | Construction method, face identification method and the electronic equipment of human face recognition model |
CN109919214A (en) * | 2019-02-27 | 2019-06-21 | 南京地平线机器人技术有限公司 | A kind of training method and training device of neural network model |
CN109919214B (en) * | 2019-02-27 | 2023-07-21 | 南京地平线机器人技术有限公司 | Training method and training device for neural network model |
CN111861966A (en) * | 2019-04-18 | 2020-10-30 | 杭州海康威视数字技术股份有限公司 | Model training method and device and defect detection method and device |
CN111861966B (en) * | 2019-04-18 | 2023-10-27 | 杭州海康威视数字技术股份有限公司 | Model training method and device and defect detection method and device |
CN110084191B (en) * | 2019-04-26 | 2024-02-23 | 广东工业大学 | Eye shielding detection method and system |
CN110084191A (en) * | 2019-04-26 | 2019-08-02 | 广东工业大学 | A kind of eye occlusion detection method and system |
CN111860077A (en) * | 2019-04-30 | 2020-10-30 | 北京眼神智能科技有限公司 | Face detection method, face detection device, computer-readable storage medium and equipment |
CN110210314A (en) * | 2019-05-06 | 2019-09-06 | 深圳市华付信息技术有限公司 | Method for detecting human face, device, computer equipment and storage medium |
CN110188627A (en) * | 2019-05-13 | 2019-08-30 | 睿视智觉(厦门)科技有限公司 | A kind of facial image filter method and device |
CN110188627B (en) * | 2019-05-13 | 2021-11-23 | 睿视智觉(厦门)科技有限公司 | Face image filtering method and device |
CN112001205A (en) * | 2019-05-27 | 2020-11-27 | 北京君正集成电路股份有限公司 | Network model sample collection method for secondary face detection |
CN112001204B (en) * | 2019-05-27 | 2024-04-02 | 北京君正集成电路股份有限公司 | Training method of network model for secondary face detection |
CN112001205B (en) * | 2019-05-27 | 2023-10-31 | 北京君正集成电路股份有限公司 | Network model sample acquisition method for secondary face detection |
CN112001204A (en) * | 2019-05-27 | 2020-11-27 | 北京君正集成电路股份有限公司 | Training method of network model for secondary face detection |
CN110188730B (en) * | 2019-06-06 | 2022-12-23 | 山东大学 | MTCNN-based face detection and alignment method |
CN110188730A (en) * | 2019-06-06 | 2019-08-30 | 山东大学 | Face datection and alignment schemes based on MTCNN |
CN110490115A (en) * | 2019-08-13 | 2019-11-22 | 北京达佳互联信息技术有限公司 | Training method, device, electronic equipment and the storage medium of Face datection model |
CN110490115B (en) * | 2019-08-13 | 2021-08-13 | 北京达佳互联信息技术有限公司 | Training method and device of face detection model, electronic equipment and storage medium |
CN110472640B (en) * | 2019-08-15 | 2022-03-15 | 山东浪潮科学研究院有限公司 | Target detection model prediction frame processing method and device |
CN110472640A (en) * | 2019-08-15 | 2019-11-19 | 山东浪潮人工智能研究院有限公司 | A kind of target detection model prediction frame processing method and processing device |
CN110263774A (en) * | 2019-08-19 | 2019-09-20 | 珠海亿智电子科技有限公司 | A kind of method for detecting human face |
CN110490170A (en) * | 2019-08-27 | 2019-11-22 | 浙江中正智能科技有限公司 | A kind of face candidate frame extracting method |
CN110490170B (en) * | 2019-08-27 | 2023-01-06 | 浙江中正智能科技有限公司 | Face candidate frame extraction method |
CN112580395A (en) * | 2019-09-29 | 2021-03-30 | 深圳市光鉴科技有限公司 | Depth information-based 3D face living body recognition method, system, device and medium |
CN110909688A (en) * | 2019-11-26 | 2020-03-24 | 南京甄视智能科技有限公司 | Face detection small model optimization training method, face detection method and computer system |
CN110909688B (en) * | 2019-11-26 | 2020-07-28 | 南京甄视智能科技有限公司 | Face detection small model optimization training method, face detection method and computer system |
US11934955B2 (en) | 2019-12-16 | 2024-03-19 | Nvidia Corporation | Neural network based facial analysis using facial landmarks and associated confidence values |
CN112989913A (en) * | 2019-12-16 | 2021-06-18 | 辉达公司 | Neural network based face analysis using facial markers and associated confidence values |
CN113051960A (en) * | 2019-12-26 | 2021-06-29 | 深圳市光鉴科技有限公司 | Depth map face detection method, system, device and storage medium |
CN111368707B (en) * | 2020-03-02 | 2023-04-07 | 佛山科学技术学院 | Face detection method, system, device and medium based on feature pyramid and dense block |
CN111368707A (en) * | 2020-03-02 | 2020-07-03 | 佛山科学技术学院 | Face detection method, system, device and medium based on feature pyramid and dense block |
CN113642353A (en) * | 2020-04-27 | 2021-11-12 | Tcl科技集团股份有限公司 | Training method of face detection model, storage medium and terminal equipment |
CN113642353B (en) * | 2020-04-27 | 2024-07-05 | Tcl科技集团股份有限公司 | Training method of face detection model, storage medium and terminal equipment |
CN111582207B (en) * | 2020-05-13 | 2023-08-15 | 北京市商汤科技开发有限公司 | Image processing method, device, electronic equipment and storage medium |
CN111582207A (en) * | 2020-05-13 | 2020-08-25 | 北京市商汤科技开发有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
WO2021227694A1 (en) * | 2020-05-13 | 2021-11-18 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, electronic device, and storage medium |
CN112287741A (en) * | 2020-06-19 | 2021-01-29 | 北京京东尚科信息技术有限公司 | Image processing-based farming operation management method and device |
CN112016464A (en) * | 2020-08-28 | 2020-12-01 | 中移(杭州)信息技术有限公司 | Method and device for detecting face shielding, electronic equipment and storage medium |
CN112016464B (en) * | 2020-08-28 | 2024-04-12 | 中移(杭州)信息技术有限公司 | Method and device for detecting face shielding, electronic equipment and storage medium |
CN112347843A (en) * | 2020-09-18 | 2021-02-09 | 深圳数联天下智能科技有限公司 | Method and related device for training wrinkle detection model |
CN112232270A (en) * | 2020-10-29 | 2021-01-15 | 广西科技大学 | MDSSD face detection method based on model quantization |
CN112232292B (en) * | 2020-11-09 | 2023-12-26 | 泰康保险集团股份有限公司 | Face detection method and device applied to mobile terminal |
CN112232292A (en) * | 2020-11-09 | 2021-01-15 | 泰康保险集团股份有限公司 | Face detection method and device applied to mobile terminal |
CN112349150A (en) * | 2020-11-19 | 2021-02-09 | 飞友科技有限公司 | Video acquisition method and system for airport flight guarantee time node |
CN112651322B (en) * | 2020-12-22 | 2024-05-24 | 北京眼神智能科技有限公司 | Cheek shielding detection method and device and electronic equipment |
CN112651322A (en) * | 2020-12-22 | 2021-04-13 | 北京眼神智能科技有限公司 | Cheek shielding detection method and device and electronic equipment |
CN113887541A (en) * | 2021-12-06 | 2022-01-04 | 北京惠朗时代科技有限公司 | Multi-region employee number detection method applied to company management |
WO2024011859A1 (en) * | 2022-07-13 | 2024-01-18 | 天翼云科技有限公司 | Neural network-based face detection method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107871134A (en) | A kind of method for detecting human face and device | |
CN109902677B (en) | Vehicle detection method based on deep learning | |
CN107742107A (en) | Facial image sorting technique, device and server | |
CN109636772A (en) | The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning | |
CN109447169A (en) | The training method of image processing method and its model, device and electronic system | |
CN104881662B (en) | A kind of single image pedestrian detection method | |
CN107316058A (en) | Improve the method for target detection performance by improving target classification and positional accuracy | |
CN108334847A (en) | A kind of face identification method based on deep learning under real scene | |
CN108564097A (en) | A kind of multiscale target detection method based on depth convolutional neural networks | |
CN107563412A (en) | A kind of infrared image power equipment real-time detection method based on deep learning | |
CN106469304A (en) | Handwritten signature location positioning method in bill based on depth convolutional neural networks | |
CN107967451A (en) | A kind of method for carrying out crowd's counting to static image using multiple dimensioned multitask convolutional neural networks | |
CN108537117A (en) | A kind of occupant detection method and system based on deep learning | |
CN107688784A (en) | A kind of character identifying method and storage medium based on further feature and shallow-layer Fusion Features | |
CN106557778A (en) | Generic object detection method and device, data processing equipment and terminal device | |
CN107330453A (en) | The Pornographic image recognizing method of key position detection is recognized and merged based on substep | |
CN107729854A (en) | A kind of gesture identification method of robot, system and robot | |
CN105654067A (en) | Vehicle detection method and device | |
CN104346802B (en) | A kind of personnel leave the post monitoring method and equipment | |
CN108961675A (en) | Fall detection method based on convolutional neural networks | |
CN107871101A (en) | A kind of method for detecting human face and device | |
CN109389599A (en) | A kind of defect inspection method and device based on deep learning | |
CN109635694A (en) | A kind of pedestrian detection method, device, equipment and computer readable storage medium | |
CN109740676A (en) | Object detection moving method based on similar purpose | |
CN105654066A (en) | Vehicle identification method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20180403 |