CN108509929A - Method for generating a face detection model, face detection method and apparatus - Google Patents

Method for generating a face detection model, face detection method and apparatus

Info

Publication number
CN108509929A
Authority
CN
China
Prior art keywords
face
head
trunk
feature
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810315220.1A
Other languages
Chinese (zh)
Inventor
He Zeqiang (何泽强)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810315220.1A
Publication of CN108509929A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The embodiments of the present application disclose a method for generating a face detection model, a face detection method, and corresponding apparatus. One specific implementation of the method includes: obtaining an initial face detection model and using it as the current face detection model; obtaining a sample set; and performing the following training step: inputting the sample set into the current face detection model to obtain face feature information, head feature information, and torso feature information for each sample face image; determining a face feature loss value, a head feature loss value, and a torso feature loss value, respectively; determining a total loss value; and back-propagating the total loss value through the current face detection model to update the parameters of the current face detection model, obtaining an updated face detection model. The embodiments of the present application can improve the accuracy and recall of the face detection model.

Description

Method for generating a face detection model, face detection method and apparatus
Technical field
The embodiments of the present application relate to the field of computer technology, and in particular to a method for generating a face detection model, a face detection method, and a corresponding apparatus.
Background art
Face detection is an important step in automatic face recognition systems, which are finding increasingly wide application. Face detection typically refers to searching any given image with a certain strategy to determine whether it contains a face. If a face is present, its position, size, pose, and other attributes can be returned.
Existing face detection is performed by a trained neural network. Specifically, an image is input into the neural network, which outputs the face detection result for the image.
Summary of the invention
The embodiments of the present application propose a method for generating a face detection model, a face detection method, and corresponding apparatus.
In a first aspect, an embodiment of the present application provides a method for generating a face detection model, including: obtaining an initial face detection model, and using the obtained initial face detection model as the current face detection model; obtaining a sample set, where each sample in the sample set includes a sample face image and annotation information, the annotation information marking the face, head, and torso contained in the sample face image; and, using the sample set, performing the following training step on the current face detection model: inputting the sample set into the current face detection model to obtain face feature information, head feature information, and torso feature information for each sample face image; determining, respectively, a face feature loss value, a head feature loss value, and a torso feature loss value between the face feature information, head feature information, and torso feature information and the face annotation, head annotation, and torso annotation in the annotation information; determining a total loss value based on a weighted sum of the face feature loss value, the head feature loss value, and the torso feature loss value; back-propagating the total loss value through the current face detection model to update the parameters of the current face detection model, obtaining an updated face detection model; and, in response to the total loss value corresponding to the updated face detection model being less than a preset loss threshold, determining the updated face detection model to be the generated face detection model.
In some embodiments, determining the face feature loss value, the head feature loss value, and the torso feature loss value between the feature information and the face, head, and torso annotations in the annotation information includes: determining the probabilities that the face position, head position, and torso position corresponding to the face feature information, head feature information, and torso feature information in each sample face image contain a face, a head, and a torso, respectively; determining, respectively, the deviations between the face position, head position, and torso position and the face, head, and torso marked by the annotation information of the sample face image; and, based on the determined probabilities and deviations, determining the face feature loss value, head feature loss value, and torso feature loss value between the feature information and the corresponding annotations.
In some embodiments, when the total loss value is determined based on the weighted sum of the face feature loss value, the head feature loss value, and the torso feature loss value, the weight of the face feature loss value is greater than the weight of the head feature loss value, and the weight of the head feature loss value is greater than the weight of the torso feature loss value.
In some embodiments, the training step further includes: in response to the total loss value corresponding to the updated face detection model being not less than the preset loss threshold, using the updated face detection model as the current face detection model and continuing to perform the training step.
In a second aspect, an embodiment of the present application provides a face detection method, including: obtaining a target face image; and inputting the target face image into a pre-trained face detection model to obtain face feature information, where the pre-trained face detection model is a face detection model generated by the method of any embodiment of the first aspect.
In a third aspect, an embodiment of the present application provides an apparatus for generating a face detection model, including: a first obtaining unit configured to obtain an initial face detection model and use the obtained initial face detection model as the current face detection model; a second obtaining unit configured to obtain a sample set, where each sample in the sample set includes a sample face image and annotation information, the annotation information marking the face, head, and torso contained in the sample face image; and a training unit, the training unit including: an input subunit configured to input the sample set into the current face detection model to obtain the face feature information, head feature information, and torso feature information of each sample face image; a first determining subunit configured to determine, respectively, the face feature loss value, head feature loss value, and torso feature loss value between the feature information and the face, head, and torso annotations in the annotation information; a second determining subunit configured to determine a total loss value based on a weighted sum of the face feature loss value, the head feature loss value, and the torso feature loss value; an updating subunit configured to back-propagate the total loss value through the current face detection model to update the parameters of the current face detection model, obtaining an updated face detection model; and a generating subunit configured to, in response to the total loss value corresponding to the updated face detection model being less than a preset loss threshold, determine the updated face detection model to be the generated face detection model.
In some embodiments, the first determining subunit includes: a probability determining module configured to determine the probabilities that the face position, head position, and torso position corresponding to the face feature information, head feature information, and torso feature information in each sample face image contain a face, a head, and a torso, respectively; a deviation determining module configured to determine, respectively, the deviations between the face position, head position, and torso position and the face, head, and torso marked by the annotation information of the sample face image; and a loss determining module configured to determine, based on the determined probabilities and deviations, the face feature loss value, head feature loss value, and torso feature loss value between the feature information and the corresponding annotations.
In some embodiments, the weight of the face feature loss value is greater than the weight of the head feature loss value, and the weight of the head feature loss value is greater than the weight of the torso feature loss value.
In some embodiments, the training unit further includes: a model updating subunit configured to, in response to the total loss value corresponding to the updated face detection model being not less than the preset loss threshold, use the updated face detection model as the current face detection model and feed the current face detection model back into the training unit.
In a fourth aspect, an embodiment of the present application provides a face detection apparatus, including: a target obtaining unit configured to obtain a target face image; and a feature obtaining unit configured to input the target face image into a pre-trained face detection model to obtain face feature information, where the pre-trained face detection model is a face detection model generated by the apparatus for generating a face detection model of any embodiment of the third aspect.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage apparatus for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the method for generating a face detection model.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method of any embodiment of the method for generating a face detection model.
In a seventh aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage apparatus for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the face detection method.
In an eighth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the face detection method.
The method for generating a face detection model provided by the embodiments of the present application proceeds as follows. First, an initial face detection model is obtained and used as the current face detection model. Next, a sample set is obtained, where each sample includes a sample face image and annotation information marking the face, head, and torso contained in the image. Then, using the sample set, the following training step is performed on the current face detection model: step 1, input the sample set into the current face detection model to obtain the face feature information, head feature information, and torso feature information of each sample face image; step 2, determine, respectively, the face feature loss value, head feature loss value, and torso feature loss value between the feature information and the face, head, and torso annotations in the annotation information; step 3, determine a total loss value based on a weighted sum of the three loss values; step 4, back-propagate the total loss value through the current face detection model to update its parameters, obtaining an updated face detection model; step 5, in response to the total loss value corresponding to the updated face detection model being less than a preset loss threshold, determine the updated face detection model to be the generated face detection model. By using information about the head and torso in addition to information about the face itself, the embodiments of the present application can improve the accuracy and recall of the face detection model.
Description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flow chart of one embodiment of the method for generating a face detection model according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for generating a face detection model according to the present application;
Fig. 4 is a flow chart of another embodiment of the method for generating a face detection model according to the present application;
Fig. 5 is a flow chart of one embodiment of the face detection method according to the present application;
Fig. 6 is a schematic structural diagram of one embodiment of the apparatus for generating a face detection model according to the present application;
Fig. 7 is a schematic structural diagram of one embodiment of the face detection apparatus according to the present application;
Fig. 8 is a schematic structural diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the invention.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the method or apparatus for generating a face detection model of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminals 101, 102, and 103, a network 104, and a server 105. The network 104 is the medium that provides communication links between the terminals 101, 102, 103 and the server 105, and may include various connection types, such as wired links, wireless communication links, or fiber-optic cables.
A user may use the terminals 101, 102, 103 to interact with the server 105 over the network 104 to receive or send messages. Various communication client applications may be installed on the terminals 101, 102, 103, such as image processing applications, face recognition applications, search applications, instant messaging tools, email clients, and social platform software.
The terminals 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, e-book readers, laptop computers, and desktop computers. When they are software, they may be installed on the electronic devices listed above and may be implemented either as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or a single software module; no specific limitation is imposed here.
When the terminals 101, 102, 103 are hardware, they may also be equipped with an image acquisition device, which may be any device capable of capturing images, such as a camera or a sensor. A user may use the image acquisition device on the terminals 101, 102, 103 to capture a face image of himself or of others.
The server 105 may be a server that provides various services, such as a background server supporting the images displayed on the terminals 101, 102, 103. The background server may analyze and otherwise process received data such as the sample set, and feed the processing result (for example, the generated face detection model) back to the terminal device.
It should be noted that the method for generating a face detection model provided by the embodiments of the present application may be performed by the server 105 or by the terminals 101, 102, 103; correspondingly, the apparatus for generating a face detection model is generally disposed in the server 105 or in the terminals 101, 102, 103.
It should be understood that the numbers of terminals, networks, and servers in Fig. 1 are merely illustrative; any number of terminals, networks, and servers may be provided as required.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating a face detection model according to the present application is shown. The method for generating a face detection model includes the following steps:
Step 201: obtain an initial face detection model, and use the obtained initial face detection model as the current face detection model.
In this embodiment, the executing body of the method for generating a face detection model (for example, the server shown in Fig. 1) may obtain an initial face detection model and use the obtained initial face detection model as the current face detection model. The face detection model is used to detect faces in images.
In practice, the initial face detection model may be any existing neural network model created with machine learning techniques, with any existing neural network structure (such as DenseBox, VGGNet, ResNet, or SegNet). The storage location of the current face detection model is not restricted in the present application.
Step 202: obtain a sample set, where each sample in the sample set includes a sample face image and annotation information, the annotation information marking the face, head, and torso contained in the sample face image.
In this embodiment, the above executing body may obtain a sample set and select samples from it. The annotation information marks the exact face position, head position, and torso position contained in a sample face image. The face, head, and torso are annotated separately. For example, rectangular boxes may be used to delimit the positions of the face, the head, and the torso, and each delimited position may be represented by at least one pair of diagonal corner coordinates, that is, two abscissa values and two ordinate values. For instance, the face annotation may be expressed as (x1, y1, x2, y2), the head annotation as (x3, y3, x4, y4), and the torso annotation as (x5, y5, x6, y6). It should be noted that these coordinates are coordinates in a pre-established rectangular coordinate system.
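The diagonal-corner convention can be made concrete with a short sketch. The patent does not prescribe a data structure, so the dictionary layout, field names, and coordinate values below are illustrative assumptions; each of the three regions is an axis-aligned rectangle delimited by a pair of diagonal corners.

```python
# One sample's annotation information, in an illustrative form: each of
# the face, head, and torso is a rectangle (x_min, y_min, x_max, y_max).
annotation = {
    "face": (120, 80, 200, 170),    # (x1, y1, x2, y2)
    "head": (110, 60, 210, 190),    # (x3, y3, x4, y4)
    "torso": (80, 190, 260, 400),   # (x5, y5, x6, y6)
}

def box_area(box):
    """Width times height of an (x_min, y_min, x_max, y_max) rectangle;
    zero for a degenerate box whose corners are out of order."""
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)
```

Representing the predicted feature information in the same four-number form (as the patent notes below) makes the prediction and the annotation directly comparable when computing losses.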
Using the sample set, the following training step 203 is performed on the current face detection model:
In this embodiment, step 203 is broken down into the following five sub-steps: sub-step 2031, sub-step 2032, sub-step 2033, sub-step 2034, and sub-step 2035.
Sub-step 2031: input the sample set into the current face detection model to obtain the face feature information, head feature information, and torso feature information of each sample face image.
In this sub-step, the executing body may input the sample set into the current face detection model, from which a feature map may be obtained. Inputting the selected sample face images into the current face detection model yields the face feature information, head feature information, and torso feature information of each sample face image. The initial face detection model can extract image features from the sample face images. Here, the face feature information is information that characterizes the face features in an image; for example, it may include the position of the face in the image. Likewise, the head feature information characterizes the head features in the image, and the torso feature information characterizes the torso features in the image.
In practice, the head position usually contains the face position, so the head feature information usually also contains the face feature information; similarly, the torso position usually contains the head position, so the torso feature information usually also contains the head feature information. In addition, to enable model training, the face feature information, head feature information, and torso feature information obtained here generally share the same representation as the annotation information: the face feature information may be expressed as (x1, y1, x2, y2), the head feature information as (x3, y3, x4, y4), and the torso feature information as (x5, y5, x6, y6).
Sub-step 2032: determine, respectively, the face feature loss value, head feature loss value, and torso feature loss value between the face feature information, head feature information, and torso feature information and the face annotation, head annotation, and torso annotation in the annotation information.
In this sub-step, the executing body may determine the three loss values respectively. For example, the face feature information and the face annotation may be passed as arguments into a specified loss function, so that the loss value between the two can be calculated.
In practice, a loss function is typically used to estimate the degree of inconsistency between the model's predicted value (such as the face feature information) and the ground-truth value (such as the face annotation). In general, a loss function is a non-negative real-valued function, and it can be chosen according to actual requirements.
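The patent leaves the loss function unspecified beyond requiring a non-negative real-valued function chosen to fit actual requirements. Purely as an illustration, the sketch below uses a smooth L1 loss, a common choice for bounding-box regression, to compare a hypothetical predicted face box with its annotation; the function name and the sample coordinates are assumptions, not the patent's prescribed loss.

```python
def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 loss between two coordinate tuples: quadratic for small
    per-coordinate errors, linear for large ones; averaged over coordinates."""
    total = 0.0
    for p, t in zip(pred, target):
        d = abs(p - t)
        total += 0.5 * d * d / beta if d < beta else d - 0.5 * beta
    return total / len(pred)

# Hypothetical predicted face box versus its annotation (x1, y1, x2, y2).
face_pred = (118.0, 82.0, 203.0, 168.0)
face_gt = (120.0, 80.0, 200.0, 170.0)
face_loss = smooth_l1(face_pred, face_gt)  # one "face feature loss value"
```

The head and torso loss values would be computed the same way from their respective predicted boxes and annotations.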
Sub-step 2033: determine a total loss value based on a weighted sum of the face feature loss value, the head feature loss value, and the torso feature loss value.
In this sub-step, the executing body may weight the face feature loss value, the head feature loss value, and the torso feature loss value, obtain the weighted sum, and use the weighted sum directly as the total loss value. Alternatively, the executing body may input the weighted sum into a preset formula or model, or multiply it by a predetermined coefficient, to calculate the total loss value.
In some optional implementations of this embodiment, the weight of the face feature loss value may be greater than the weight of the head feature loss value, and the weight of the head feature loss value may be greater than the weight of the torso feature loss value.
In this embodiment, assigning a larger weight to the face feature loss value increases the influence of the face while still taking the face, head, and torso in the sample face image into account, thereby making the current face detection model more sensitive to faces.
Alternatively, the weights of the face feature loss value, the head feature loss value, and the torso feature loss value may be set equal.
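The weighted-sum rule for the total loss can be sketched as follows. The specific weight values are illustrative assumptions; the optional implementation above only requires that the face weight exceed the head weight and the head weight exceed the torso weight.

```python
def total_loss(face_loss, head_loss, torso_loss,
               w_face=1.0, w_head=0.5, w_torso=0.25):
    """Weighted sum of the three loss terms. The default weights are
    illustrative; they merely satisfy w_face > w_head > w_torso so that
    the face term dominates the total loss."""
    assert w_face > w_head > w_torso > 0
    return w_face * face_loss + w_head * head_loss + w_torso * torso_loss
```

As the text notes, this weighted sum may be used directly as the total loss value or further transformed by a preset formula or coefficient.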
Sub-step 2034: back-propagate the total loss value through the current face detection model to update the parameters of the current face detection model, obtaining an updated face detection model.
In this sub-step, the executing body may back-propagate the total loss value through the current face detection model, updating the parameters of the face detection model to obtain an updated face detection model. The parameters here may be any parameters of the current face detection model, such as the values of the components of its convolution kernels. Because the executing body takes the weighted sum of the face, head, and torso feature loss values as the total loss value, training the current face detection model considers not only the face feature information but also the head feature information and the torso feature information as influencing factors. Using back-propagation, the executing body can train the current face detection model so that it detects faces more accurately.
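The parameter update that follows back-propagation amounts to moving each parameter against the gradient of the total loss. The toy sketch below is a simplified stand-in, not the patent's model: it uses a numerical central-difference gradient in place of the analytic gradients that back-propagation computes layer by layer, and a one-parameter quadratic loss in place of a neural network.

```python
def numeric_grad(loss_fn, params, eps=1e-6):
    """Central-difference gradient of a scalar loss with respect to each
    parameter; a stand-in for the analytic gradients that back-propagation
    computes layer by layer in a real model."""
    grads = []
    for i in range(len(params)):
        up = list(params); up[i] += eps
        dn = list(params); dn[i] -= eps
        grads.append((loss_fn(up) - loss_fn(dn)) / (2 * eps))
    return grads

def sgd_step(params, grads, lr=0.1):
    """One update of every parameter: theta <- theta - lr * dL/dtheta."""
    return [p - lr * g for p, g in zip(params, grads)]

# Toy check: minimising L(theta) = (theta - 3)^2 drives theta toward 3,
# just as minimising the total loss drives the convolution-kernel values
# toward configurations that fit the face, head, and torso annotations.
toy_loss = lambda ps: (ps[0] - 3.0) ** 2
theta = [0.0]
for _ in range(50):
    theta = sgd_step(theta, numeric_grad(toy_loss, theta))
```

In practice a deep-learning framework performs both the gradient computation and the update; this sketch only shows the direction of the update rule.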
Sub-step 2035: in response to the total loss value corresponding to the updated face detection model being less than a preset loss threshold, determine the updated face detection model to be the generated face detection model.
In this sub-step, in response to the total loss value between the output of the updated face detection model and the annotation information being less than the preset loss threshold, the executing body may determine that training of the current face detection model is complete, and determine the updated face detection model to be the generated face detection model. As an example, if multiple samples were selected in step 202, the executing body may determine that training of the current face detection model is complete when the total loss value of every sample is less than the preset loss threshold. As another example, the executing body may count the proportion of samples in the sample set whose total loss value is less than the preset loss threshold, and determine that training of the current face detection model is complete when that proportion reaches a preset sample proportion (for example, 95%).
The preset loss threshold generally represents an ideal degree of inconsistency between the predicted values (the face, head, and torso feature information) and the ground-truth values (the face, head, and torso annotations). That is, when the total loss value is less than the preset loss threshold, the predicted values may be considered close or approximately equal to the ground truth. The preset loss threshold can be set according to actual requirements.
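The proportion-based stopping test (training is complete when the fraction of samples whose total loss falls below the preset threshold reaches a preset sample proportion such as 95%) can be sketched as below; the default threshold of 0.05 is an illustrative assumption, since the patent leaves it to actual requirements.

```python
def training_converged(total_losses, threshold=0.05, required_ratio=0.95):
    """Return True when the fraction of samples whose total loss is below
    the preset loss threshold reaches the required sample proportion."""
    below = sum(1 for loss in total_losses if loss < threshold)
    return below / len(total_losses) >= required_ratio
```

Setting `required_ratio=1.0` recovers the stricter first example, in which every sample's total loss must fall below the threshold.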
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating a face detection model according to this embodiment. In the application scenario of Fig. 3, an electronic device 301 obtains an initial face detection model 302 locally or from another electronic device and uses it as the current face detection model 303; it obtains a sample set 304, where each sample includes a sample face image and annotation information marking the face, head, and torso contained in the image; and, using the sample set, it performs the following training step on the current face detection model: input the sample set into the current face detection model 303 to obtain the face feature information, head feature information, and torso feature information 305 of each sample face image; determine, respectively, the face feature loss value, head feature loss value, and torso feature loss value 306 between the feature information and the face, head, and torso annotations in the annotation information; determine a total loss value 307 based on a weighted sum of the three loss values; back-propagate the total loss value through the current face detection model to update its parameters, obtaining an updated face detection model 308; and, in response to the total loss value corresponding to the updated face detection model being less than the preset loss threshold, determine the updated face detection model to be the generated face detection model 309.
The above embodiment of the present application may use the information about the face in the samples together with the information about the head and torso beyond the face to generate the face detection model, which can improve the accuracy and recall of the face detection model. Back-propagating a total loss value that involves the face, head, and torso feature loss values through the current face detection model adjusts the model's parameters and further improves the accuracy of the face detection model.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for generating a face detection model is illustrated. The flow 400 of this method for generating a face detection model includes the following steps:
Step 401: obtain an initial face detection model, and take the obtained initial face detection model as the current face detection model.
In the present embodiment, the electronic device on which the method for generating a face detection model runs (such as the server shown in Fig. 1) may obtain an initial face detection model, and take the obtained initial face detection model as the current face detection model. The face detection model is used to detect faces in images.
In practice, the initial face detection model may be any of various existing neural network models created based on machine learning techniques. The neural network model may have any of various existing neural network structures (such as DenseBox, VGGNet, ResNet, SegNet, etc.). The storage location of the current face detection model is not restricted in the present application.
Step 402: obtain a sample set, where each sample in the sample set includes a sample face image and annotation information, the annotation information marking the face, head, and trunk contained in the sample face image.
In the present embodiment, the above electronic device may obtain a sample set and select samples from it. The annotation information marks the accurate face position, head position, and trunk position contained in a sample face image. The above electronic device annotates the face, head, and trunk separately. For example, the above electronic device may use rectangular boxes to delimit the respective positions of the face, head, and trunk.
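As an illustration, one sample's annotation under such a scheme might be represented as below. This is a minimal sketch under assumed conventions: the field names and the (x, y, w, h) box format are not specified by the patent, which states only that the face, head, and trunk are each marked.

```python
# Hypothetical annotation record for one sample face image.
# Field names and the (x, y, w, h) box convention are illustrative
# assumptions; the patent only states that the face, head and trunk
# are each marked, e.g. with rectangular boxes.
sample_annotation = {
    "image_path": "sample_0001.jpg",
    "face":  (120, 80, 60, 60),    # x, y, width, height
    "head":  (110, 60, 80, 90),
    "trunk": (80, 150, 140, 200),
}

def box_corners(box):
    """Convert an (x, y, w, h) box into its two diagonal corners."""
    x, y, w, h = box
    return (x, y), (x + w, y + h)

# In a well-formed annotation the head box encloses the face box.
(fx0, fy0), (fx1, fy1) = box_corners(sample_annotation["face"])
(hx0, hy0), (hx1, hy1) = box_corners(sample_annotation["head"])
print(hx0 <= fx0 and hy0 <= fy0 and hx1 >= fx1 and hy1 >= fy1)  # True
```

The diagonal-corner representation of a box also matches the optional representation mentioned later in the device embodiment.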
Perform the following training step 403:
In the present embodiment, step 403 is decomposed into the following eight sub-steps: sub-step 4031, sub-step 4032, sub-step 4033, sub-step 4034, sub-step 4035, sub-step 4036, sub-step 4037, and sub-step 4038.
Sub-step 4031: input the sample set into the current face detection model to obtain the facial feature information, head feature information, and trunk feature information of each sample face image.
In this sub-step, the above execution body may input the sample face images selected from the sample set into the current face detection model, which can produce feature maps. Inputting the selected sample face images into the current face detection model yields the facial feature information, head feature information, and trunk feature information of each sample face image. The initial face detection model can extract image features from the sample face images. Here, facial feature information is information that characterizes facial features in the image, head feature information is information that characterizes head features in the image, and trunk feature information is information that characterizes trunk features in the image.
Sub-step 4032: determine the probabilities that the face position, head position, and trunk position corresponding to the facial feature information, head feature information, and trunk feature information in each sample face image contain a face, a head, and a trunk, respectively.
In this sub-step, the above facial feature information, head feature information, and trunk feature information have corresponding face, head, and trunk positions in each sample face image. The above execution body may determine the probability that the above face position contains a face, the probability that the above head position contains a head, and the probability that the above trunk position contains a trunk. In practice, these probabilities can be produced and output by the current face detection model. A probability at the maximum of its numerical range indicates that the face position corresponding to the facial feature information in the sample face image contains an accurate face.
Sub-step 4033: determine the deviations between the face position, head position, and trunk position and the face, head, and trunk marked by the annotation information of the sample face image, respectively.
In this sub-step, the above execution body may determine the deviation (offset) between the above face position and the face marked by the annotation information, the deviation between the above head position and the head marked by the annotation information, and the deviation between the above trunk position and the trunk marked by the annotation information. The annotation information here is the annotation information of the above sample face image.
Specifically, the annotated face, head, and trunk can be regarded as ground truth. The face position, head position, and trunk position delimited by the face detection model can rarely match the ground truth exactly. In the usual case, there are deviations between the ground truth and the face position, head position, and trunk position corresponding to the facial feature information, head feature information, and trunk feature information in the sample face image.
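To make the notion of deviation concrete, the sketch below computes a per-coordinate offset between a delimited box and the annotated ground truth. The (x, y, w, h) format and the plain-difference parameterization are assumptions for illustration; the patent does not fix either.

```python
def box_offset(pred, truth):
    """Deviation (offset) between a position delimited by the model and
    the annotated ground-truth box, both as (x, y, w, h). A plain
    per-coordinate difference is assumed here for illustration."""
    return tuple(p - t for p, t in zip(pred, truth))

pred_face = (118, 82, 62, 58)   # box delimited by the detection model
truth_face = (120, 80, 60, 60)  # annotated ground truth
print(box_offset(pred_face, truth_face))  # (-2, 2, 2, -2)
```

A perfect prediction gives an all-zero offset, matching the observation above that exact agreement with the ground truth is rarely achieved in practice.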
Sub-step 4034: based on the determined probabilities and deviations, determine the face feature loss value, head feature loss value, and trunk feature loss value between the facial feature information, head feature information, and trunk feature information and the face annotation, head annotation, and trunk annotation in the annotation information, respectively.
In this sub-step, based on the determined probabilities and deviations, the above execution body determines the face feature loss value between the above facial feature information and the above face annotation, the head feature loss value between the above head feature information and the above head annotation, and the trunk feature loss value between the above trunk feature information and the above trunk annotation.
The above execution body may use the loss function of step 2032 to determine each of the above loss values. Specifically, a confidence loss can be computed from the above probabilities, and a localization loss can be computed from the above deviations. A feature loss value is then determined as a weighted sum of which the confidence loss and the localization loss each form a part. For example, the confidence loss may be computed from the probability that the face position corresponding to the facial feature information in the sample face image contains a face, and the localization loss may be computed from the deviation between the above face position and the annotated face. The above execution body then determines the weighted sum of the confidence loss and the localization loss of the facial feature information according to the preset weight of the confidence loss and the preset weight of the localization loss, and takes this weighted sum as the face feature loss value.
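The weighted combination just described can be sketched as follows. The cross-entropy form of the confidence loss and the smooth-L1 form of the localization loss are assumptions borrowed from common detectors (e.g. SSD); the patent only names the two loss terms and their preset weights.

```python
import math

def confidence_loss(prob):
    """Confidence loss from the probability that the position contains
    the target part; a cross-entropy with a positive label is assumed."""
    return -math.log(max(prob, 1e-12))

def localization_loss(offsets):
    """Localization loss over the box deviation; a smooth-L1 form is
    assumed here, as commonly paired with a confidence loss."""
    total = 0.0
    for d in offsets:
        total += 0.5 * d * d if abs(d) < 1.0 else abs(d) - 0.5
    return total

def part_loss(prob, offsets, w_conf=1.0, w_loc=1.0):
    """Feature loss value for one part (face, head or trunk): the
    weighted sum of confidence loss and localization loss, with the
    weights set in advance."""
    return w_conf * confidence_loss(prob) + w_loc * localization_loss(offsets)

# A confident, well-localized prediction yields a small loss value.
print(round(part_loss(0.9, (0.2, -0.1, 0.0, 0.3)), 4))  # 0.1754
```

The same `part_loss` form applies to each of the three parts; only the probability and deviation fed into it change.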
Sub-step 4035: determine a total loss value based on the weighted sum of the face feature loss value, head feature loss value, and trunk feature loss value.
In this sub-step, the above execution body determines the weighted sum of the face feature loss value, head feature loss value, and trunk feature loss value, and determines the total loss value based on this weighted sum. Specifically, the above execution body may weight the face feature loss value, head feature loss value, and trunk feature loss value to obtain the weighted sum. Afterwards, the above execution body may take the weighted sum as the total loss value, or may input the weighted sum into a preset formula or model, or multiply it by a predetermined coefficient, to obtain the total loss value.
Specifically, weights may be set in advance for the face feature loss value, head feature loss value, and trunk feature loss value according to actual conditions, so as to obtain the weighted sum.
In some optional implementations of the present embodiment, the weight of the face feature loss value is greater than the weight of the head feature loss value, and the weight of the head feature loss value is greater than the weight of the trunk feature loss value.
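Under that optional weighting, the total loss value could be computed as below. The descending weights follow the patent's face > head > trunk ordering, but the specific values 1.0/0.5/0.25 are illustrative assumptions.

```python
def total_loss(face_loss, head_loss, trunk_loss,
               w_face=1.0, w_head=0.5, w_trunk=0.25):
    """Total loss value as the weighted sum of the three part losses,
    with the face weighted above the head and the head above the trunk.
    The specific weight values are illustrative assumptions."""
    return w_face * face_loss + w_head * head_loss + w_trunk * trunk_loss

# With these weights the face term dominates: the same raw loss counts
# four times as much through the face branch as through the trunk branch.
print(abs(total_loss(0.8, 0.6, 0.4) - 1.2) < 1e-9)  # True
```

This ordering keeps the gradient signal focused on the face, while the head and trunk terms act as auxiliary context.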
Sub-step 4036: back-propagate the total loss value through the current face detection model to update the parameters of the current face detection model, obtaining an updated face detection model.
In this sub-step, the above execution body may back-propagate the total loss value through the current face detection model to update the parameters in the face detection model, obtaining an updated face detection model. The parameters here may be any of various parameters in the current face detection model. When training the current face detection model, the above execution body considers not only the facial feature information, but also takes the head feature information and trunk feature information as influencing factors of the current face detection model. Using back-propagation, the above execution body can train the current face detection model so that the model detects faces more accurately.
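At its core, the parameter update driven by back-propagation reduces to a gradient step. The minimal plain-Python sketch below stands in for a framework's optimizer step; the learning rate and the flat parameter list are assumptions for illustration.

```python
def sgd_update(params, grads, lr=0.01):
    """One gradient-descent update of the model parameters using the
    gradients produced by back-propagating the total loss value.
    A minimal stand-in for a framework's optimizer step."""
    return [p - lr * g for p, g in zip(params, grads)]

# Parameters move against their gradients by lr * grad.
updated = sgd_update([1.0, 2.0], [10.0, -5.0])
print([round(p, 6) for p in updated])  # [0.9, 2.05]
```

Because the gradients come from the combined total loss, a single step like this moves the shared parameters in a direction informed by the face, head, and trunk terms at once.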
Sub-step 4037: in response to the total loss value corresponding to the updated face detection model being less than a preset loss value threshold, determine the updated face detection model to be the generated face detection model.
In this sub-step, in response to the total loss value between the output of the updated face detection model and the annotation information being less than the preset loss value threshold, the above execution body may determine that the current face detection model has finished training, and determine the updated face detection model to be the generated face detection model. As an example, when the total loss value of every sample is less than the preset loss value threshold, the execution body may determine that the current face detection model has finished training. As another example, the execution body may count the proportion of samples in the sample set whose total loss value is less than the preset loss value threshold; when this proportion reaches a preset sample proportion (such as 95%), it may be determined that the current face detection model has finished training.
Sub-step 4038: in response to the total loss value corresponding to the updated face detection model being not less than the preset loss value threshold, take the updated face detection model as the current face detection model, and continue to perform the training step.
In this sub-step, in response to the total loss value between the output of the updated face detection model and the annotation information being not less than the preset loss value threshold, the above execution body may determine that the current face detection model has not finished training, and take the updated face detection model as the current face detection model. Afterwards, the above execution body continues to perform the training step using the specified sample set; when performing the training step the next time, the above execution body may input the specified sample set into the current face detection model. As an example, when the total loss value of every sample is not less than the preset loss value threshold, the execution body may determine that the current face detection model has not finished training. As another example, the execution body may count the proportion of the selected samples whose total loss value is less than the preset loss value threshold; if this proportion does not reach the preset sample proportion (such as 95%), it may be determined that the current face detection model has not finished training.
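The proportion-based stopping criterion described in sub-steps 4037 and 4038 can be sketched as follows; the 95% default mirrors the patent's example value.

```python
def training_complete(sample_losses, loss_threshold, required_ratio=0.95):
    """Decide whether training has finished: the proportion of samples
    whose total loss value falls below the preset loss value threshold
    must reach the preset sample proportion (95% in the patent's example)."""
    if not sample_losses:
        return False
    below = sum(1 for loss in sample_losses if loss < loss_threshold)
    return below / len(sample_losses) >= required_ratio

losses = [0.05] * 96 + [0.5] * 4  # 96% of samples fall below the threshold
print(training_complete(losses, loss_threshold=0.1))                       # True
print(training_complete(losses, loss_threshold=0.1, required_ratio=0.97))  # False
```

When the function returns `False`, the loop continues: the updated model becomes the current model and the training step is performed again.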
By setting probabilities and deviations, the present embodiment adjusts the current face detection model more precisely, thereby further improving the accuracy with which the generated face detection model detects faces. In addition, when the total loss value is not less than the preset loss value threshold, the face detection model is trained further, which can bring the model's detection results closer to the ground truth.
With further reference to Fig. 5, a flow 500 of one embodiment of the method for face detection is illustrated. The flow 500 of this method for face detection includes the following steps:
Step 501: obtain a target face image.
In the present embodiment, the electronic device on which the method for face detection runs (such as the server shown in Fig. 1) may obtain a target face image from a local or another electronic device. A face image here is an image that presents a face.
Step 502: input the target face image into a pre-trained face detection model to obtain facial feature information; where the pre-trained face detection model is a face detection model generated by the method of any one of the embodiment shown in Fig. 2 or the embodiment shown in Fig. 4.
In the present embodiment, the above execution body inputs the target face image into the pre-trained face detection model to obtain a detection result. The detection result is facial feature information, such as a face box (x, y, w, h). Here, the pre-trained face detection model is the updated face detection model obtained by the method of any one of the embodiment shown in Fig. 2 or the embodiment shown in Fig. 4.
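Once the model returns a face box (x, y, w, h), a typical post-processing step (not prescribed by the patent, shown only for illustration) is to clamp the box to the image bounds before any downstream use such as cropping:

```python
def clamp_box(image_size, face_box):
    """Clamp a detected face box (x, y, w, h) to the image bounds.
    An illustrative post-processing step; the patent does not
    prescribe what is done with the detected box."""
    img_w, img_h = image_size
    x, y, w, h = face_box
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(img_w, x + w), min(img_h, y + h)
    return x0, y0, x1 - x0, y1 - y0

# A box partly outside a 640x480 image is trimmed to fit.
print(clamp_box((640, 480), (600, -10, 80, 100)))  # (600, 0, 40, 90)
```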
The present embodiment performs face detection using a face detection model whose parameters have been adjusted through back-propagation, which can improve the accuracy and recall rate of the detection results.
With further reference to Fig. 6, the present application provides one embodiment of a device for generating a face detection model. This device embodiment corresponds to the method embodiment shown in Fig. 2, and the device can be applied in various electronic devices.
As shown in Fig. 6, the device for generating a face detection model of the present embodiment includes: a first acquisition unit 601, a second acquisition unit 602, and a training unit 603. The first acquisition unit 601 is configured to obtain an initial face detection model and take the obtained initial face detection model as the current face detection model. The second acquisition unit 602 is configured to obtain a sample set, where each sample in the sample set includes a sample face image and annotation information, the annotation information marking the face, head, and trunk contained in the sample face image. The training unit 603 includes: an input sub-unit 6031, configured to input the sample set into the current face detection model to obtain the facial feature information, head feature information, and trunk feature information of each sample face image; a first determination sub-unit 6032, configured to determine the face feature loss value, head feature loss value, and trunk feature loss value between the facial feature information, head feature information, and trunk feature information and the face annotation, head annotation, and trunk annotation in the annotation information, respectively; a second determination sub-unit 6033, configured to determine a total loss value based on the weighted sum of the face feature loss value, head feature loss value, and trunk feature loss value; and an update sub-unit 6034, configured to back-propagate the total loss value through the current face detection model to update the parameters of the current face detection model, obtaining an updated face detection model. The training unit further includes a generation sub-unit 6035, configured to determine the updated face detection model to be the generated face detection model in response to the total loss value being less than a preset loss value threshold.
In the present embodiment, the first acquisition unit 601 may obtain an initial face detection model, and take the obtained initial face detection model as the current face detection model. The face detection model is used to detect faces in images.
In the present embodiment, the second acquisition unit 602 may obtain a sample set and select samples from it. The annotation information marks the accurate face position, head position, and trunk position contained in a sample face image. The above execution body annotates the face, head, and trunk separately. For example, the above execution body may use rectangular boxes to delimit the respective positions of the face, head, and trunk. Specifically, at least one pair of diagonal coordinates of a delimited position may be used to represent that position.
In the present embodiment, the input sub-unit 6031 may input the sample face images selected from the sample set into the current face detection model, which can produce feature maps. Inputting the selected sample face images into the current face detection model yields the facial feature information, head feature information, and trunk feature information of each sample face image. The initial face detection model can extract image features from the sample face images. Here, facial feature information is information that characterizes facial features in the image, head feature information is information that characterizes head features in the image, and trunk feature information is information that characterizes trunk features in the image.
In the present embodiment, the first determination sub-unit 6032 may determine the face feature loss value, head feature loss value, and trunk feature loss value between the above facial feature information, head feature information, and trunk feature information and the face annotation, head annotation, and trunk annotation in the annotation information, respectively. For example, the facial feature information and the face annotation may be taken as parameters and input into a specified loss function, so that the loss value between the two can be calculated.
In the present embodiment, the second determination sub-unit 6033 determines the weighted sum of the face feature loss value, head feature loss value, and trunk feature loss value, and determines the total loss value based on this weighted sum. Specifically, the above execution body may weight the face feature loss value, head feature loss value, and trunk feature loss value to obtain the weighted sum. Afterwards, the above execution body may take the weighted sum as the total loss value, or may input the weighted sum into a preset formula or model, or multiply it by a predetermined coefficient, to obtain the total loss value.
In the present embodiment, the update sub-unit 6034 may back-propagate the total loss value through the current face detection model to update the parameters in the face detection model, obtaining an updated current face detection model. The parameters here may be any of various parameters in the current face detection model, such as the weights of the convolutional layers in the initial face detection model. When training the current face detection model, the above execution body considers not only the facial feature information, but also takes the head feature information and trunk feature information as influencing factors of the current face detection model. Using back-propagation, the current face detection model can be trained so that the model detects faces more accurately.
In the present embodiment, in response to the total loss value being less than the preset loss value threshold, the generation sub-unit 6035 may determine that the current face detection model has finished training, and determine the updated face detection model to be the generated face detection model.
In some optional implementations of the present embodiment, the first determination sub-unit includes: a probability determination module, configured to determine the probabilities that the face position, head position, and trunk position corresponding to the facial feature information, head feature information, and trunk feature information in each sample face image contain a face, a head, and a trunk, respectively; a deviation determination module, configured to determine the deviations between the face position, head position, and trunk position and the face, head, and trunk marked by the annotation information of the sample face image, respectively; and a loss determination module, configured to determine, based on the determined probabilities and deviations, the face feature loss value, head feature loss value, and trunk feature loss value between the facial feature information, head feature information, and trunk feature information and the face annotation, head annotation, and trunk annotation in the annotation information, respectively.
In some optional implementations of the present embodiment, the weight of the face feature loss value is greater than the weight of the head feature loss value, and the weight of the head feature loss value is greater than the weight of the trunk feature loss value.
In some optional implementations of the present embodiment, the device further includes a model update sub-unit, configured to take the updated face detection model as the current face detection model in response to the total loss value being not less than the preset loss value threshold, and input the current face detection model into the training unit.
With further reference to Fig. 7, the present application provides one embodiment of a device for face detection. This device embodiment corresponds to the method embodiment shown in Fig. 5, and the device can be applied in various electronic devices.
As shown in Fig. 7, the device 700 for face detection of the present embodiment includes: a target acquisition unit 701 and a feature acquisition unit 702. The target acquisition unit 701 is configured to obtain a target face image. The feature acquisition unit 702 is configured to input the target face image into a pre-trained face detection model to obtain facial feature information; where the pre-trained face detection model is the updated face detection model obtained by the method in the embodiment corresponding to Fig. 5.
In the present embodiment, the target acquisition unit 701 may obtain a target face image from a local or another electronic device. A face image here is an image that presents a face.
In the present embodiment, the feature acquisition unit 702 inputs the target face image into the pre-trained face detection model to obtain a detection result. The detection result is facial feature information. Here, the pre-trained face detection model is the updated face detection model obtained by the method of any one of the embodiment shown in Fig. 2 or the embodiment shown in Fig. 4.
Referring now to Fig. 8, a structural schematic diagram of a computer system 800 suitable for implementing the electronic device of the embodiments of the present application is illustrated. The electronic device shown in Fig. 8 is merely an example and should not impose any restriction on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 8, the computer system 800 includes a central processing unit (CPU) 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the system 800. The CPU 801, ROM 802, and RAM 803 are connected to each other through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, etc.; an output section 807 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker; a storage section 808 including a hard disk, etc.; and a communications section 809 including a network interface card such as a LAN card or a modem. The communications section 809 performs communication processing via a network such as the Internet. A driver 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the driver 810 as needed, so that a computer program read from it may be installed into the storage section 808 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communications section 809, and/or installed from the removable medium 811. When the computer program is executed by the central processing unit (CPU) 801, the above functions defined in the method of the present application are performed. It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, which may be used by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted with any suitable medium, including but not limited to wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to the various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in an order different from that indicated in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be arranged in a processor; for example, a processor may be described as comprising a first acquisition unit, a second acquisition unit, and a training unit. The names of these units do not, under certain conditions, constitute a limitation on the units themselves; for example, the first acquisition unit may also be described as "a unit that obtains an initial face detection model".
In another aspect, the present application also provides a computer-readable medium, which may be contained in the device described in the above embodiments, or may exist separately without being assembled into the device. The above computer-readable medium carries one or more programs. When the one or more programs are executed by the device, the device: obtains an initial face detection model, and takes the obtained initial face detection model as the current face detection model; obtains a sample set, where each sample in the sample set includes a sample face image and annotation information, the annotation information marking the face, head, and trunk contained in the sample face image; and performs the following training steps: inputting the sample set into the current face detection model to obtain the facial feature information, head feature information, and trunk feature information of each sample face image; determining, respectively, the face feature loss value, head feature loss value, and trunk feature loss value between the facial feature information, head feature information, and trunk feature information and the face annotation, head annotation, and trunk annotation in the annotation information; determining a total loss value based on the weighted sum of the face feature loss value, head feature loss value, and trunk feature loss value; back-propagating the total loss value through the current face detection model to update the parameters of the current face detection model, obtaining an updated face detection model; and in response to the total loss value corresponding to the updated face detection model being less than a preset loss value threshold, determining the updated face detection model to be the generated face detection model.
The above description is merely a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, such as technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.

Claims (14)

1. A method for generating a face detection model, comprising:
Initial Face detection model is obtained, using acquired Initial Face detection model as current face's detection model;
Obtain sample set, wherein the sample in the sample set includes sample facial image and markup information, the mark letter Cease face, head and the trunk for being included for marking the sample facial image;
Using the sample set, following training step is executed to current face's detection model:
The sample set is inputted into current face's detection model, obtains the facial feature information, head of this facial image of various kinds Portion's characteristic information and trunk characteristic information;
The facial feature information, head feature information and trunk characteristic information and the face in the markup information are determined respectively Face feature penalty values, head feature penalty values and trunk characteristic loss value between mark, head mark and trunk mark;
Weighted sum based on the face feature penalty values, head feature penalty values and trunk characteristic loss value, determines total losses Value;
By total losses value backpropagation in current face's detection model, to update the ginseng of current face's detection model Number, obtains updated Face datection model;
It is less than preset penalty values threshold value in response to the corresponding total losses value of the updated Face datection model, it will The updated Face datection model is determined as generated Face datection model.
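The total-loss computation of claim 1, with the weight ordering of claim 3, can be sketched as follows. The weights, loss values and threshold are illustrative assumptions; the claims do not fix concrete numbers:

```python
# Sketch of the total-loss step of claim 1: a weighted sum of the face,
# head and torso feature loss values. All numbers are illustrative; the
# claims do not specify concrete weights or a concrete threshold.

def total_loss(face_loss, head_loss, torso_loss,
               w_face=0.6, w_head=0.3, w_torso=0.1):
    """Weighted sum of the three per-part loss values.

    The default weights follow claim 3: w_face > w_head > w_torso,
    so errors on the face dominate the training signal.
    """
    return w_face * face_loss + w_head * head_loss + w_torso * torso_loss

LOSS_THRESHOLD = 0.5  # the "preset loss value threshold" of claim 1 (assumed)
loss = total_loss(face_loss=0.8, head_loss=0.5, torso_loss=0.4)
converged = loss < LOSS_THRESHOLD  # training stops once this becomes True
```

Weighting the face term highest biases the shared network toward the primary detection target while the head and torso terms act as auxiliary supervision.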
2. The method according to claim 1, wherein the determining, respectively, a face feature loss value, a head feature loss value and a torso feature loss value between the face feature information, the head feature information and the torso feature information and the face annotation, the head annotation and the torso annotation in the annotation information comprises:
determining probabilities that the face position, the head position and the torso position respectively corresponding to the face feature information, the head feature information and the torso feature information in each sample face image contain a face, a head and a torso;
determining, respectively, deviations between the face position, the head position and the torso position and the face, the head and the torso annotated by the annotation information of the sample face image; and
determining, based on the determined probabilities and deviations, the face feature loss value, the head feature loss value and the torso feature loss value, respectively.
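The per-part loss of claim 2 can be sketched for a single part (face, head or torso). Cross-entropy on the inclusion probability and smooth L1 on the positional deviation are assumptions here (the pairing used by SSD, which this application cites); the claim itself does not name concrete loss functions:

```python
import math

# Sketch of claim 2 for one body part (face, head or torso): the loss is
# built from (i) the probability that the predicted position contains the
# part and (ii) the deviation between predicted and annotated positions.
# Cross-entropy and smooth L1 are assumed loss functions, not mandated
# by the claim.

def smooth_l1(x):
    """Smooth L1 penalty on one coordinate of the positional deviation."""
    return 0.5 * x * x if abs(x) < 1.0 else abs(x) - 0.5

def part_loss(contains_prob, deviations):
    cls_loss = -math.log(max(contains_prob, 1e-12))   # probability term
    reg_loss = sum(smooth_l1(d) for d in deviations)  # deviation term
    return cls_loss + reg_loss

# A face predicted with probability 0.9 and small (x, y, w, h) offsets:
face_loss = part_loss(0.9, [0.1, -0.2, 0.05, 0.0])
```

A high inclusion probability and small box deviations both drive the part loss toward zero, matching the claim's dependence on probability and deviation.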
3. The method according to claim 1, wherein, when the total loss value is determined based on the weighted sum of the face feature loss value, the head feature loss value and the torso feature loss value, the weight of the face feature loss value is greater than the weight of the head feature loss value, and the weight of the head feature loss value is greater than the weight of the torso feature loss value.
4. The method according to claim 1, wherein the training step further comprises:
in response to the total loss value corresponding to the updated face detection model being not less than the preset loss value threshold, using the updated face detection model as the current face detection model, and continuing to perform the training step.
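Claims 1 and 4 together form a loop: repeat the training step, taking each updated model as the current model, until the total loss value drops below the preset threshold. A minimal skeleton, with a stand-in update that merely decays the loss (a real step would be a forward pass plus back-propagation):

```python
# Skeleton of the loop formed by claims 1 and 4: while the total loss is
# not below the preset threshold, take the updated model as the current
# model and train again. The decay factor is an assumption that makes
# this sketch terminate.

def train_until_below_threshold(initial_loss, threshold, decay=0.8):
    current_loss = initial_loss
    steps = 0
    while current_loss >= threshold:         # claim 4: keep training
        current_loss = decay * current_loss  # stand-in for one update step
        steps += 1
    return current_loss, steps               # claim 1: this model is kept

final_loss, steps = train_until_below_threshold(initial_loss=1.0, threshold=0.5)
```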
5. A face detection method, comprising:
acquiring a target face image; and
inputting the target face image into a pre-trained face detection model to obtain face feature information, wherein the pre-trained face detection model is a face detection model generated by the method according to any one of claims 1-4.
6. An apparatus for generating a face detection model, comprising:
a first acquiring unit, configured to acquire an initial face detection model and use the acquired initial face detection model as a current face detection model;
a second acquiring unit, configured to acquire a sample set, wherein a sample in the sample set comprises a sample face image and annotation information, the annotation information being used to annotate a face, a head and a torso included in the sample face image; and
a training unit, the training unit comprising:
an inputting subunit, configured to input the sample set into the current face detection model to obtain face feature information, head feature information and torso feature information of each sample face image;
a first determining subunit, configured to determine, respectively, a face feature loss value, a head feature loss value and a torso feature loss value between the face feature information, the head feature information and the torso feature information and a face annotation, a head annotation and a torso annotation in the annotation information;
a second determining subunit, configured to determine a total loss value based on a weighted sum of the face feature loss value, the head feature loss value and the torso feature loss value;
an updating subunit, configured to back-propagate the total loss value through the current face detection model to update parameters of the current face detection model, to obtain an updated face detection model; and
a generating subunit, configured to, in response to the total loss value corresponding to the updated face detection model being less than a preset loss value threshold, determine the updated face detection model as the generated face detection model.
7. The apparatus according to claim 6, wherein the first determining subunit comprises:
a probability determining module, configured to determine probabilities that the face position, the head position and the torso position respectively corresponding to the face feature information, the head feature information and the torso feature information in each sample face image contain a face, a head and a torso;
a deviation determining module, configured to determine, respectively, deviations between the face position, the head position and the torso position and the face, the head and the torso annotated by the annotation information of the sample face image; and
a loss determining module, configured to determine, based on the determined probabilities and deviations, the face feature loss value, the head feature loss value and the torso feature loss value, respectively.
8. The apparatus according to claim 6, wherein, when the total loss value is determined based on the weighted sum of the face feature loss value, the head feature loss value and the torso feature loss value, the weight of the face feature loss value is greater than the weight of the head feature loss value, and the weight of the head feature loss value is greater than the weight of the torso feature loss value.
9. The apparatus according to claim 6, wherein the training unit further comprises:
a model updating subunit, configured to, in response to the total loss value corresponding to the updated face detection model being not less than the preset loss value threshold, use the updated face detection model as the current face detection model and input the current face detection model into the training unit.
10. A face detection apparatus, comprising:
a target acquiring unit, configured to acquire a target face image; and
a feature acquiring unit, configured to input the target face image into a pre-trained face detection model to obtain face feature information, wherein the pre-trained face detection model is a face detection model generated by the apparatus for generating a face detection model according to any one of claims 6-9.
11. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-4.
12. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-4.
13. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to claim 5.
14. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to claim 5.
CN201810315220.1A 2018-04-08 2018-04-08 Method for generating a face detection model, face detection method, and apparatus Pending CN108509929A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810315220.1A CN108509929A (en) 2018-04-08 2018-04-08 Method for generating a face detection model, face detection method, and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810315220.1A CN108509929A (en) 2018-04-08 2018-04-08 Method for generating a face detection model, face detection method, and apparatus

Publications (1)

Publication Number Publication Date
CN108509929A true CN108509929A (en) 2018-09-07

Family

ID=63381222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810315220.1A Pending CN108509929A (en) Method for generating a face detection model, face detection method, and apparatus

Country Status (1)

Country Link
CN (1) CN108509929A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9373057B1 (en) * 2013-11-01 2016-06-21 Google Inc. Training a neural network to detect objects in images
CN107220618A * 2017-05-25 2017-09-29 Institute of Automation, Chinese Academy of Sciences Face detection method and apparatus, computer-readable storage medium, and device
CN107644208A * 2017-09-21 2018-01-30 Baidu Online Network Technology (Beijing) Co., Ltd. Face detection method and apparatus
CN107729848A * 2017-10-20 2018-02-23 Peking University Object detection method and apparatus
CN107833209A * 2017-10-27 2018-03-23 Zhejiang Dahua Technology Co., Ltd. X-ray image detection method and apparatus, electronic device and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wei Liu et al., "SSD: Single Shot MultiBox Detector", ECCV 2016: Computer Vision - ECCV 2016 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109492128A * 2018-10-30 2019-03-19 Beijing ByteDance Network Technology Co., Ltd. Method and apparatus for generating a model
CN109492128B * 2018-10-30 2020-01-21 Beijing ByteDance Network Technology Co., Ltd. Method and apparatus for generating a model
CN111563541A * 2020-04-21 2020-08-21 Beijing Baidu Netcom Science and Technology Co., Ltd. Training method and apparatus for an image detection model
CN111563541B * 2020-04-21 2023-04-18 Beijing Baidu Netcom Science and Technology Co., Ltd. Training method and apparatus for an image detection model
CN111814553A * 2020-06-08 2020-10-23 Zhejiang Dahua Technology Co., Ltd. Face detection method, model training method and related apparatus
CN111814553B * 2020-06-08 2023-07-11 Zhejiang Dahua Technology Co., Ltd. Face detection method, model training method and related apparatus

Similar Documents

Publication Publication Date Title
CN108038469B (en) Method and apparatus for detecting human body
CN108898185A (en) Method and apparatus for generating image recognition model
CN108427941A Method for generating a face detection model, face detection method, and apparatus
CN108038880A (en) Method and apparatus for handling image
CN109117358A Test method and test apparatus for electronic device
CN107590482A Information generating method and apparatus
CN109858445A (en) Method and apparatus for generating model
CN108898186A (en) Method and apparatus for extracting image
CN108197618A Method and apparatus for generating a face detection model
CN108171203A Method and apparatus for identifying a vehicle
CN109389072A (en) Data processing method and device
CN107845016A Information output method and apparatus
CN108509929A Method for generating a face detection model, face detection method, and apparatus
CN108062544A Method and apparatus for face liveness detection
CN108509916A (en) Method and apparatus for generating image
CN110070076A Method and apparatus for selecting training samples
CN109242801A (en) Image processing method and device
CN109634767A Method and apparatus for detecting information
CN109086780A (en) Method and apparatus for detecting electrode piece burr
CN109241934A (en) Method and apparatus for generating information
CN108509921A (en) Method and apparatus for generating information
CN109214501A Method and apparatus for identifying information
CN108171208A (en) Information acquisition method and device
CN108399401A (en) Method and apparatus for detecting facial image
CN108133197A Method and apparatus for generating information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180907