CN107844785A - Face detection method based on scale estimation - Google Patents

Face detection method based on scale estimation

Info

Publication number
CN107844785A
Authority
CN
China
Prior art keywords
models
face
training
size estimation
stage2
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711294249.8A
Other languages
Chinese (zh)
Other versions
CN107844785B (en)
Inventor
尚凌辉
王弘玥
张兆生
丁连涛
郑永宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Jieshang Safety Equipment Co.,Ltd.
Original Assignee
ZHEJIANG ICARE VISION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZHEJIANG ICARE VISION TECHNOLOGY Co Ltd filed Critical ZHEJIANG ICARE VISION TECHNOLOGY Co Ltd
Priority to CN201711294249.8A priority Critical patent/CN107844785B/en
Publication of CN107844785A publication Critical patent/CN107844785A/en
Application granted granted Critical
Publication of CN107844785B publication Critical patent/CN107844785B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148: Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face detection method based on scale estimation. The method first uses face scale estimation to estimate the size of faces in an image, then scales the image according to the estimated face scale, quickly extracts proposal boxes with a fully convolutional network, and finally performs two cascaded stages of classification and regression to obtain the face detection result. By combining face scale estimation with cascaded convolutional neural networks, the method reduces the overall computation and running time of face detection while preserving detection accuracy.

Description

A face detection method based on scale estimation
Technical field
The invention belongs to the technical field of video surveillance and relates to a face detection method based on scale estimation.
Background art
The function of a face detection method is to determine whether faces are present in an image or video and, if so, to predict their positions and sizes. Face detection is the basis for all further face analysis. Its running time is one of its key problems: in most cases, reducing the running time requires sacrificing part of the detection accuracy. Estimating the scale of faces in advance can greatly reduce the running time of face detection.
Among existing techniques, "A face detection method and device" (201510639824.8) uses the AdaBoost method with fast multi-scale pyramid feature extraction before classification; it effectively reduces computation while maintaining detection accuracy, but AdaBoost cannot effectively exploit today's large volumes of training data to improve detection performance. "Face detection method and device" (201610206093.2) uses AdaBoost to extract proposal boxes and then a convolutional neural network for face detection; the proposal-extraction threshold of its first step must be tuned for each scene, which is cumbersome. "Face detection method and device" (201710618497.7) performs face detection with cascaded convolutional neural networks; because its overall computation is large, its running time is a serious problem.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a face detection method based on scale estimation.
The method first uses face scale estimation to estimate the size of faces in the image, then scales the image according to the estimated face scale, quickly extracts proposal boxes with a fully convolutional network, and finally performs two cascaded stages of classification and regression to obtain the face detection result.
The method of the invention mainly includes the following steps:
Step 1: offline training
1.1. Train face scale estimation.
Face scale estimation divides the scale space into multiple intervals and decides, for each interval, whether the input image contains a face belonging to that interval. In short, face scale estimation is a set of binary classifications that produces a score vector indicating, for each interval, whether a face of that scale is present.
1.1.1. Scale the original image;
1.1.2. Compute the face scale score vector from the mean of each face's width and height on the scaled image. For an interval, if there is a face belonging to that interval's scale, the corresponding entry of the score vector is set to 1 (a positive sample); otherwise the corresponding entry is set to 0 (a negative sample).
1.1.3. The training loss is a weighted cross-entropy loss:
Loss = -(1/N) Σ_{n=1}^{N} w_n [ p_n log p̂_n + (1 - p_n) log(1 - p̂_n) ]
where Loss denotes the loss, N the number of scale intervals, n the scale interval index, w_n the weight of the n-th scale interval, p_n the target score of the n-th scale interval, and p̂_n the estimated score of the n-th scale interval.
Further, when fine-grained scale estimation is required, the scale intervals must be divided more finely, i.e. N becomes very large, while the number of positive samples stays roughly the same. This makes positive and negative samples imbalanced and training hard to converge; giving positive samples a larger weight than negative samples helps convergence. The face scale score vectors generated from the annotation data also have large variance; to alleviate this, the weights of negative samples adjacent to a positive sample are set to 0 (see the sketch after step 1.1.4 below).
1.1.4. Train the face scale classifier; horizontal-flip augmentation is applied to the images during training.
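For illustration, a minimal NumPy sketch of the weighted cross-entropy loss of step 1.1.3 is given below. The concrete weights (16 for positives, 1 for negatives, weight 0 for negatives within a radius of one interval of a positive) are taken from the embodiment further below; the function itself is only an assumed reference implementation, not the patent's code.

```python
import numpy as np

def weighted_cross_entropy(p, p_hat, pos_weight=16.0, neg_weight=1.0, zero_radius=1):
    """Weighted cross-entropy over N scale intervals.

    p      : target score vector (1 for intervals containing a face, else 0)
    p_hat  : estimated score vector produced by the scale classifier
    Negatives within `zero_radius` intervals of a positive get weight 0, which
    mitigates the large variance of the annotated scale vectors.
    """
    p = np.asarray(p, dtype=np.float64)
    p_hat = np.clip(np.asarray(p_hat, dtype=np.float64), 1e-7, 1.0 - 1e-7)
    n_intervals = len(p)

    w = np.where(p > 0.5, pos_weight, neg_weight)
    for i in np.flatnonzero(p > 0.5):                  # zero out negatives near a positive
        lo, hi = max(0, i - zero_radius), min(n_intervals, i + zero_radius + 1)
        for j in range(lo, hi):
            if p[j] < 0.5:
                w[j] = 0.0

    return -(w * (p * np.log(p_hat) + (1 - p) * np.log(1 - p_hat))).sum() / n_intervals
```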
1.2. Train the Stage1 model. The Stage1 model is a multi-task model whose tasks are classification and regression: the classification task decides whether an image patch is a face, and the regression task regresses the positions of the top-left and bottom-right corners of the face bounding box. Training samples for the Stage1 model are generated from the face annotation data.
1.3. Train the Stage2 model. The Stage2 model is a multi-task model whose tasks are classification and regression. Compared with the Stage1 model, its classification and regression ability is stronger and its model size larger. The Stage1 model is used to scan the original image at multiple scales, and the scan boxes whose classification score exceeds a threshold are taken as training samples for the Stage2 model.
1.4. Train the Stage3 model. The Stage3 model is a multi-task model whose tasks are classification and regression. Compared with the Stage1 and Stage2 models, its classification and regression ability is stronger and its model size larger. The results of the Stage1 and Stage2 models are used to obtain the training samples of the Stage3 model. The input of a Stage3 training sample is two image blocks: one is the regression result of the Stage2 model, and the other is the image block obtained by expanding the Stage2 regression box outward to twice its size. After Stage3 training is complete, hard example mining is performed on Stage3 and the model is fine-tuned.
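The second Stage3 input block is obtained by expanding the Stage2 regression box outward to twice its size. A minimal sketch of that expansion is shown below; clipping to the image border is an added assumption, since the text only says the box is expanded outward.

```python
def expand_box_2x(x1, y1, x2, y2, img_w, img_h):
    """Expand a box to twice its width and height around its center.

    Clipping to the image border is an assumption not stated in the patent.
    """
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = x2 - x1, y2 - y1
    nx1, ny1 = cx - w, cy - h          # new half-width equals the old full width
    nx2, ny2 = cx + w, cy + h
    return (max(0, nx1), max(0, ny1), min(img_w, nx2), min(img_h, ny2))
```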
Step 2: online detection
2.1. Input an image.
2.2. Face scale estimation:
2.2.1. Scale the input image and feed it to the trained face scale estimation model to obtain the face scale estimation score vector.
2.2.2. Smooth the face scale estimation score vector.
2.2.3. Apply non-maximum suppression to the face scale estimation score vector to obtain the face scales.
2.3. The Stage1 model is a fully convolutional network. Using the face scale estimation result, scale the input image and feed it to the Stage1 model.
2.4. Feed the results of the Stage1 model to the Stage2 model and perform classification and regression.
2.5. Feed the results of the Stage2 model to the Stage3 model. The input is two image blocks: one is the regression result of the Stage2 model, and the other is the box obtained by expanding the Stage2 regression box outward to twice its size. Perform classification and regression.
2.6. Merge the detection boxes produced by the Stage3 model; the merging strategy is non-maximum suppression (a sketch is given below).
2.7. Output the detection result.
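Step 2.6 merges the Stage3 detection boxes with non-maximum suppression. A standard greedy NMS sketch is shown below; the IoU threshold is an assumed parameter, since the patent does not specify one.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    boxes : (N, 4) array of [x1, y1, x2, y2]
    scores: (N,) array of detection scores
    Returns the indices of the boxes that are kept.
    """
    boxes = np.asarray(boxes, dtype=np.float64)
    scores = np.asarray(scores, dtype=np.float64)
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]

    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]
    return keep
```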
Beneficial effects of the present invention:
1. Face scale estimation reduces the number of image pyramid levels required by the subsequent Stage1 model, increasing the average detection speed.
2. Face scale estimation uses a weighted cross-entropy loss, which makes training converge easily, alleviates the large variance of the sample annotations, and yields more accurate scale estimates.
3. The Stage1 model is a fully convolutional network, which reduces the amount of convolution computation and increases detection speed.
4. A cascaded detection scheme is used: the three Stage models are ordered from light to heavy, giving a good overall balance between detection speed and accuracy.
5. Besides the Stage2 regression result, the input of the Stage3 model also includes an image block of twice its size, so the Stage3 model sees the region surrounding the face, which improves detection accuracy.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention.
Fig. 2 is the network structure of the face scale classifier.
Fig. 3 is the network structure of the Stage1 model.
Fig. 4 is the network structure of the Stage2 model.
Fig. 5 is the network structure of the Stage3 model.
Embodiment
To make the purpose, technical solution and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The specific implementation steps of the present invention are as follows:
Step 1: offline training
1.1. Train face scale estimation.
1.1.1. Scale the original image to 224x224: the long side is proportionally scaled to 224 and the short side is zero-padded.
1.1.2. Compute the face scale target score vector from the mean of each face's width and height on the scaled image. The target range of face scales is [2^2.6, 2^8] with an interval ratio of 2^0.1, giving 55 binary classifications in total. For example, if the image contains one or more faces of scale 2^4, then the 2^4 class is a positive sample for that image. To address the imbalance between positive and negative samples, the classification weight of positive samples is 16 and that of negative samples is 1. To address the large variance of the annotation data, the classification weight of negative samples within a neighborhood of 1 of a positive sample is set to 0. A sketch of these two steps is given below.
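A sketch of steps 1.1.1 and 1.1.2: the long side of the image is proportionally scaled to 224 with the short side zero-padded, and the 55-interval target vector and its weights are built from the mean of each face's width and height. The use of OpenCV and the nearest-interval binning rule are assumptions for illustration.

```python
import numpy as np
import cv2  # assumed here only for resizing; any image library would do

SCALE_EXPS = np.linspace(2.6, 8.0, 55)   # 55 intervals from 2^2.6 to 2^8, ratio 2^0.1

def resize_and_pad(img, target=224):
    """Proportionally scale the long side to `target` and zero-pad the short side."""
    h, w = img.shape[:2]
    zoom = target / float(max(h, w))
    resized = cv2.resize(img, (int(round(w * zoom)), int(round(h * zoom))))
    canvas = np.zeros((target, target) + img.shape[2:], dtype=img.dtype)
    canvas[:resized.shape[0], :resized.shape[1]] = resized
    return canvas, zoom

def scale_targets(face_boxes, zoom, pos_weight=16.0, neg_weight=1.0):
    """Build the 55-dim target score vector and per-interval weights.

    face_boxes: (x1, y1, x2, y2) tuples in original-image coordinates.
    A face whose mean of width and height (after zooming) is closest to 2^e makes
    interval e positive; negatives adjacent to a positive get weight 0.
    """
    target = np.zeros(len(SCALE_EXPS))
    for x1, y1, x2, y2 in face_boxes:
        size = 0.5 * ((x2 - x1) + (y2 - y1)) * zoom
        target[int(np.argmin(np.abs(SCALE_EXPS - np.log2(max(size, 1e-6)))))] = 1.0
    weights = np.where(target > 0.5, pos_weight, neg_weight)
    for i in np.flatnonzero(target > 0.5):
        for j in (i - 1, i + 1):
            if 0 <= j < len(target) and target[j] < 0.5:
                weights[j] = 0.0
    return target, weights
```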
1.1.3. The face scale classifier is trained with caffe; horizontal-flip augmentation is applied to the images during training, and the network structure is shown in Fig. 2. conv_blokA contains one convolutional layer, one ReLU activation layer and one BN normalization layer; the convolution kernel of its convolutional layer is 3x3, with stride 1 and padding 1. conv_blokB contains one convolutional layer, one ReLU activation layer and one BN normalization layer; the convolution kernel of its convolutional layer is 5x5, with stride 1 and padding 2. conv_blokC contains one convolutional layer, one ReLU activation layer and one BN normalization layer; the convolution kernel of its convolutional layer is 1x1, with stride 1 and padding 0. Inception is a composite convolution structure made up of several convolution structures; its layout is shown in the figure. Concat is a concatenation layer inside Inception that joins 4 convolutions: conv_blokC1, conv_blokB2, conv_blokC3_2 and conv_blokC4. The output dimension of conv_blokA1 is 224x224x8; the output sizes of the other convolution blocks are shown in the network structure diagram. conv_cls is a convolutional layer with kernel size 3x3, stride 1 and padding 1. global_max_pool is a global max pooling layer. prob is a softMax layer.
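The patent trains this classifier with caffe and defines the topology in Fig. 2. Purely as an illustration of the building blocks described above (convolution + ReLU + BN, a conv_cls head, global max pooling), a rough PyTorch sketch is shown below; the channel counts, the omission of the Inception branch, and the use of a per-interval sigmoid instead of the figure's softMax layer are simplifying assumptions.

```python
import torch
import torch.nn as nn

def conv_blok(in_ch, out_ch, kernel=3, pad=1):
    """Convolution + ReLU + BN block, mirroring conv_blokA/B/C (kernel and padding vary)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=kernel, stride=1, padding=pad),
        nn.ReLU(inplace=True),
        nn.BatchNorm2d(out_ch),
    )

class ScaleClassifierSketch(nn.Module):
    """Simplified stand-in for the Fig. 2 network: conv bloks, a 3x3 conv_cls head,
    global max pooling over the spatial map, and one score per scale interval."""

    def __init__(self, num_intervals=55):
        super().__init__()
        self.features = nn.Sequential(
            conv_blok(3, 8),                      # conv_blokA1 outputs 224x224x8 per the text
            conv_blok(8, 16, kernel=5, pad=2),    # conv_blokB-style block
            conv_blok(16, 32, kernel=1, pad=0),   # conv_blokC-style block
        )
        self.conv_cls = nn.Conv2d(32, num_intervals, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        x = self.conv_cls(self.features(x))
        x = torch.amax(x, dim=(2, 3))             # global_max_pool
        return torch.sigmoid(x)                   # per-interval face-scale scores
```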
1.2. Train the Stage1 model. The Stage1 model is a multi-task model whose tasks are classification and regression: the classification task decides whether an image patch is a face, and the regression task regresses the positions of the top-left and bottom-right corners of the face bounding box. Training samples for the Stage1 model are generated from the face annotation data; the image patches are scaled to 12x12 and the positive-to-negative sample ratio is 1:3 (see the sketch after the network description below). The network structure of the Stage1 model is shown in Fig. 3.
conv_blok contains one convolutional layer, one ReLU activation layer and one BN normalization layer; the convolution kernel of its convolutional layer is 3x3, with stride 1 and padding 1. The output dimension of conv_blok1 is 12x12x8; the output sizes of the other convolution blocks are shown in the network structure diagram. conv_cls is a convolutional layer with kernel size 3x3, stride 1 and padding 0. fc_cls is a convolutional layer with kernel size 1x1, stride 1 and padding 0. prob is a softMax layer. conv_reg is a convolutional layer with kernel size 3x3, stride 1 and padding 0. fc_reg is a convolutional layer with kernel size 1x1, stride 1 and padding 0.
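A possible sketch of the Stage1 training-sample generation described above: positive patches cropped at the annotated face boxes, random background patches as negatives, everything resized to 12x12 at a 1:3 positive-to-negative ratio. The IoU criterion used to reject random crops that overlap a face is an added assumption.

```python
import random
import cv2  # assumed for cropping/resizing

def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def stage1_samples(img, face_boxes, neg_per_pos=3, patch=12, neg_iou=0.3, max_tries=10000):
    """Return (12x12 patch, label) pairs: label 1 for faces, 0 for background."""
    h, w = img.shape[:2]
    samples = [(cv2.resize(img[int(y1):int(y2), int(x1):int(x2)], (patch, patch)), 1)
               for x1, y1, x2, y2 in face_boxes]
    need, tries = neg_per_pos * len(face_boxes), 0
    while need > 0 and tries < max_tries:
        tries += 1
        size = random.randint(patch, max(patch, min(h, w) // 2))
        x1 = random.randint(0, max(0, w - size))
        y1 = random.randint(0, max(0, h - size))
        box = (x1, y1, x1 + size, y1 + size)
        if all(iou(box, fb) < neg_iou for fb in face_boxes):   # keep only background crops
            samples.append((cv2.resize(img[y1:y1 + size, x1:x1 + size], (patch, patch)), 0))
            need -= 1
    return samples
```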
1.3. Train the Stage2 model. The Stage2 model is a multi-task model whose tasks are classification and regression. Compared with the Stage1 model, its classification and regression ability is stronger and its model size larger. The Stage1 model is used to scan the original image at multiple scales, and the scan boxes whose classification score exceeds 0.5 are taken as training samples for the Stage2 model; the training-sample image blocks are scaled to 24x24 and the positive-to-negative sample ratio is 1:1. The network structure of Stage2 is shown in Fig. 4; the notation is similar to Fig. 3.
1.4. Train the Stage3 model. The Stage3 model is a multi-task model whose tasks are classification and regression. Compared with the Stage1 and Stage2 models, its classification and regression ability is stronger and its model size larger. The results of the Stage1 and Stage2 models are used to obtain the training samples of the Stage3 model. The input of a Stage3 training sample is two image blocks: one is the regression result of the Stage2 model, and the other is the image block obtained by expanding the Stage2 regression box outward to twice its size; both image blocks are scaled to 32x32. After Stage3 training is complete, hard example mining is performed on Stage3 and the model is fine-tuned (a sketch is given below). The network structure of Stage3 is shown in Fig. 5. There are two data input layers, data1 and data2. Concat is a concatenation layer that joins conv_block1_1 and conv_block1_2. The other notation is similar to Fig. 3.
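The patent only states that hard example mining is performed on Stage3 before fine-tuning. One common realisation, sketched below under that assumption, keeps the negatives that the current Stage3 model scores highest as faces and adds them back to the training set; the selection criterion and quota are not from the patent.

```python
def mine_hard_negatives(stage3_score, candidates, labels, top_k=1000):
    """Return the negative Stage3 inputs that the current model most confidently mislabels.

    stage3_score: callable mapping one (patch, context_patch) pair to a face score in [0, 1]
    candidates  : list of (patch, context_patch) input pairs
    labels      : 0/1 ground-truth labels for those pairs
    """
    scored = [(stage3_score(pair), pair) for pair, y in zip(candidates, labels) if y == 0]
    scored.sort(key=lambda t: t[0], reverse=True)   # most confident mistakes first
    return [pair for _, pair in scored[:top_k]]
```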
Step 2: online detection
2.1. Input an image.
2.2. Face scale estimation:
2.2.1. Scale the longest edge of the input image to 224 and feed it to the trained face scale estimation model to obtain the 55 face scale estimation scores.
2.2.2. Smooth the 55 face scale estimation scores with a window of 3.
2.2.3. Apply one-dimensional non-maximum suppression with a window of 5 to the 55 face scale estimation scores to obtain the face scales (see the sketch below).
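A sketch of steps 2.2.2 and 2.2.3: the 55 scores are smoothed with a window of 3 and local maxima are selected with a one-dimensional non-maximum suppression of window 5. The simple moving average and the minimum score threshold are assumptions for illustration.

```python
import numpy as np

def select_scales(scores, smooth_win=3, nms_win=5, min_score=0.5):
    """Return the indices of the scale intervals kept after smoothing and 1-D NMS."""
    scores = np.asarray(scores, dtype=np.float64)
    kernel = np.ones(smooth_win) / smooth_win
    smoothed = np.convolve(scores, kernel, mode="same")      # window-3 moving average

    half = nms_win // 2
    kept = []
    for i, s in enumerate(smoothed):
        lo, hi = max(0, i - half), min(len(smoothed), i + half + 1)
        if s >= min_score and s == smoothed[lo:hi].max():     # local maximum within its window
            kept.append(i)
    return kept
```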
2.3. Using the face scale estimation result, scale the input image and feed it to the Stage1 model, which is a fully convolutional network, to obtain the first-stage detection result.
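Step 2.3 scales the input image according to each estimated face scale before running the fully convolutional Stage1 model. One plausible reading, assumed here, is that the image is resized so that faces of the estimated size map to the 12x12 patch size Stage1 was trained on; the patent itself only says the image is scaled using the estimation result.

```python
def zoom_factor_for_scale(scale_exp, stage1_patch=12):
    """Zoom factor that maps faces of size 2**scale_exp pixels to the Stage1 patch size.

    scale_exp is one of the 55 interval exponents (2.6 ... 8.0) selected in step 2.2.
    """
    return stage1_patch / (2.0 ** scale_exp)

# Example: an estimated face scale of 2**5 = 32 px gives a zoom factor of 12 / 32 = 0.375.
```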
2.4. Feed the results of the Stage1 model to the Stage2 model; the input image blocks are scaled to 24x24, and classification and regression are performed to obtain the second-stage detection result.
2.5. Feed the results of the Stage2 model to the Stage3 model. The input is two image blocks: one is the regression result of the Stage2 model, and the other is the box obtained by expanding the Stage2 regression box outward to twice its size; both image blocks are scaled to 32x32. Classification and regression are performed to obtain the third-stage detection result.
2.6. Merge the detection boxes produced by the Stage3 model; the merging strategy is non-maximum suppression.
2.7. Output the detection result.
In summary, the present invention combines face scale estimation with cascaded convolutional neural networks to perform face detection, which reduces the overall computation and running time of face detection while preserving detection accuracy.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the protection scope of the present invention. It should be understood that the present invention is not limited to the implementations described herein; these implementations are described to help those skilled in the art practice the present invention.

Claims (4)

1. A face detection method based on scale estimation, characterized in that the method comprises the following steps:
offline model training and online face detection; wherein
the offline model training is specifically:
Step 1.1, training face scale estimation:
Step 1.1.1, scaling the original image;
Step 1.1.2, computing the face scale score vector from the mean of each face's width and height on the scaled image; for an interval, if there is a face belonging to that interval's scale, the corresponding entry of the score vector is set to 1, as a positive sample; if there is no face belonging to that interval's scale, the corresponding entry of the score vector is set to 0, as a negative sample;
Step 1.1.3, using a weighted cross-entropy loss function as the training loss:
Loss = -(1/N) Σ_{n=1}^{N} w_n [ p_n log p̂_n + (1 - p_n) log(1 - p̂_n) ];
wherein Loss denotes the loss, N denotes the number of scale intervals, n denotes the scale interval index, w_n denotes the weight of the n-th scale interval, p_n denotes the target score of the n-th scale interval, and p̂_n denotes the estimated score of the n-th scale interval;
Step 1.1.4, training the face scale classifier, with horizontal-flip perturbation applied to the images during training;
Step 1.2, training the Stage1 model; the Stage1 model is a fully convolutional network; training samples of the Stage1 model are generated from the face annotation data;
Step 1.3, training the Stage2 model; the Stage1 model is used to scan the original image at multiple scales, and the scan boxes whose classification score exceeds a threshold are used as training samples of the Stage2 model;
Step 1.4, training the Stage3 model; the results of the Stage1 and Stage2 models are used to obtain the training samples of the Stage3 model; the input of a Stage3 training sample is two image blocks, one being the regression result of the Stage2 model and the other being the image block obtained by expanding the Stage2 regression box outward to twice its size; after Stage3 training is complete, hard example mining is performed on Stage3 and the model is fine-tuned;
the online face detection is specifically:
Step 2.1, inputting an image;
Step 2.2, face scale estimation:
Step 2.2.1, scaling the input image and feeding it to the trained face scale estimation model to obtain the face scale estimation score vector;
Step 2.2.2, smoothing the face scale estimation score vector;
Step 2.2.3, applying non-maximum suppression to the face scale estimation score vector to obtain the face scales;
Step 2.3, using the face scale estimation result, scaling the input image and feeding it to the Stage1 model, the Stage1 model being a fully convolutional network;
Step 2.4, feeding the results of the Stage1 model to the Stage2 model, and performing classification and regression;
Step 2.5, feeding the results of the Stage2 model to the Stage3 model, the input being two image blocks, one being the regression result of the Stage2 model and the other being the box obtained by expanding the Stage2 regression box outward to twice its size, and performing classification and regression;
Step 2.6, merging the detection boxes produced by the Stage3 model, the merging strategy being non-maximum suppression;
Step 2.7, outputting the detection result.
2. The face detection method based on scale estimation according to claim 1, characterized in that: when fine-grained scale estimation is needed, the scale intervals must be divided more finely, i.e. N becomes very large; however, the number of positive samples stays roughly the same, which makes positive and negative samples imbalanced and training hard to converge; giving positive samples a larger weight than negative samples helps training converge.
3. The face detection method based on scale estimation according to claim 2, characterized in that: the weights of negative samples near a positive sample are set to 0.
4. The face detection method based on scale estimation according to any one of claims 1 to 3, characterized in that: the tasks of the Stage1 model are classification and regression; the classification task decides whether an image patch is a face, and the regression task regresses the positions of the top-left and bottom-right corners of the face bounding box.
CN201711294249.8A 2017-12-08 2017-12-08 Face detection method based on scale estimation Active CN107844785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711294249.8A CN107844785B (en) 2017-12-08 2017-12-08 Face detection method based on scale estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711294249.8A CN107844785B (en) 2017-12-08 2017-12-08 Face detection method based on scale estimation

Publications (2)

Publication Number Publication Date
CN107844785A true CN107844785A (en) 2018-03-27
CN107844785B CN107844785B (en) 2019-09-24

Family

ID=61663261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711294249.8A Active CN107844785B (en) Face detection method based on scale estimation

Country Status (1)

Country Link
CN (1) CN107844785B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105912990A (en) * 2016-04-05 2016-08-31 深圳先进技术研究院 Face detection method and face detection device
CN105975961A (en) * 2016-06-28 2016-09-28 北京小米移动软件有限公司 Human face recognition method, device and terminal
CN106056101A (en) * 2016-06-29 2016-10-26 哈尔滨理工大学 Non-maximum suppression method for face detection
CN106897732A (en) * 2017-01-06 2017-06-27 华中科技大学 Multi-direction Method for text detection in a kind of natural picture based on connection word section
CN107103281A (en) * 2017-03-10 2017-08-29 中山大学 Face identification method based on aggregation Damage degree metric learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Qiaosong Chen et al.: "A Multi-Scale Fusion Convolutional Neural Network for Face Detection", IEEE *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555334A (en) * 2018-05-30 2019-12-10 东华软件股份公司 face feature determination method and device, storage medium and electronic equipment
CN110555334B (en) * 2018-05-30 2022-06-07 东华软件股份公司 Face feature determination method and device, storage medium and electronic equipment
CN109117886A (en) * 2018-08-17 2019-01-01 浙江捷尚视觉科技股份有限公司 A kind of method of target scale and region estimation in picture frame
CN109117886B (en) * 2018-08-17 2022-02-18 浙江捷尚视觉科技股份有限公司 Method for estimating target dimension and region in image frame
CN109829371B (en) * 2018-12-26 2022-04-26 深圳云天励飞技术有限公司 Face detection method and device
CN109829371A (en) * 2018-12-26 2019-05-31 深圳云天励飞技术有限公司 A kind of method for detecting human face and device
WO2020143304A1 (en) * 2019-01-07 2020-07-16 平安科技(深圳)有限公司 Loss function optimization method and apparatus, computer device, and storage medium
CN111488517A (en) * 2019-01-29 2020-08-04 北京沃东天骏信息技术有限公司 Method and device for training click rate estimation model
CN109961006A (en) * 2019-01-30 2019-07-02 东华大学 A kind of low pixel multiple target Face datection and crucial independent positioning method and alignment schemes
CN110188720A (en) * 2019-06-05 2019-08-30 上海云绅智能科技有限公司 A kind of object detection method and system based on convolutional neural networks
CN110580445A (en) * 2019-07-12 2019-12-17 西北工业大学 Face key point detection method based on GIoU and weighted NMS improvement
CN110580445B (en) * 2019-07-12 2023-02-07 西北工业大学 Face key point detection method based on GIoU and weighted NMS improvement
CN110619350A (en) * 2019-08-12 2019-12-27 北京达佳互联信息技术有限公司 Image detection method, device and storage medium
CN111241924A (en) * 2019-12-30 2020-06-05 新大陆数字技术股份有限公司 Face detection and alignment method and device based on scale estimation and storage medium
CN111241924B (en) * 2019-12-30 2024-06-07 新大陆数字技术股份有限公司 Face detection and alignment method, device and storage medium based on scale estimation
CN112434178A (en) * 2020-11-23 2021-03-02 北京达佳互联信息技术有限公司 Image classification method and device, electronic equipment and storage medium
WO2022105336A1 (en) * 2020-11-23 2022-05-27 北京达佳互联信息技术有限公司 Image classification method and electronic device

Also Published As

Publication number Publication date
CN107844785B (en) 2019-09-24

Similar Documents

Publication Publication Date Title
CN107844785A (en) A kind of method for detecting human face based on size estimation
CN111126472B (en) SSD (solid State disk) -based improved target detection method
US20200005022A1 (en) Method, terminal, and storage medium for tracking facial critical area
WO2021208502A1 (en) Remote-sensing image target detection method based on smooth bounding box regression function
CN109190442B (en) Rapid face detection method based on deep cascade convolution neural network
CN109635694B (en) Pedestrian detection method, device and equipment and computer readable storage medium
CN110188720A (en) A kind of object detection method and system based on convolutional neural networks
CN109613002B (en) Glass defect detection method and device and storage medium
CN104202547B (en) Method, projection interactive approach and its system of target object are extracted in projected picture
CN110163889A (en) Method for tracking target, target tracker, target following equipment
CN110991311A (en) Target detection method based on dense connection deep network
CN110309842B (en) Object detection method and device based on convolutional neural network
CN112528913A (en) Grit particulate matter particle size detection analytic system based on image
CN111126278B (en) Method for optimizing and accelerating target detection model for few-class scene
CN103440645A (en) Target tracking algorithm based on self-adaptive particle filter and sparse representation
CN109558815A (en) A kind of detection of real time multi-human face and tracking
CN101339661B (en) Real time human-machine interaction method and system based on moving detection of hand held equipment
CN110751195B (en) Fine-grained image classification method based on improved YOLOv3
CN106650615A (en) Image processing method and terminal
CN103208125B (en) The vision significance algorithm of color and motion global contrast in video frame images
CN112507904B (en) Real-time classroom human body posture detection method based on multi-scale features
CN106650647A (en) Vehicle detection method and system based on cascading of traditional algorithm and deep learning algorithm
CN114972312A (en) Improved insulator defect detection method based on YOLOv4-Tiny
CN109191498A (en) Object detection method and system based on dynamic memory and motion perception
CN109636764A (en) A kind of image style transfer method based on deep learning and conspicuousness detection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230428

Address after: Room 319-2, 3rd Floor, Building 2, No. 262 Wantang Road, Xihu District, Hangzhou City, Zhejiang Province, 310012

Patentee after: Hangzhou Jieshang Safety Equipment Co.,Ltd.

Address before: 311121 East Building, building 7, No. 998, Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Patentee before: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right