CN110069959A - Face detection method, apparatus and user equipment - Google Patents

Face detection method, apparatus and user equipment

Info

Publication number
CN110069959A
CN110069959A (application CN201810058182.6A)
Authority
CN
China
Prior art keywords
image
model
convolutional neural network
target
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810058182.6A
Other languages
Chinese (zh)
Inventor
田卉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Communications Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Co Ltd
Priority to CN201810058182.6A
Publication of CN110069959A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroids
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification

Abstract

The present invention provides a face detection method, apparatus and user equipment, relating to the field of communication technology. The method comprises: obtaining a target image; and detecting the position of a face and facial feature points in the target image using a multistage cascaded convolutional neural network model to obtain face information. By processing the image with a multistage cascaded convolutional neural network model, the solution of the present invention effectively improves the accuracy of face detection.

Description

Face detection method, apparatus and user equipment
Technical field
The present invention relates to the field of communication technology, and in particular to a face detection method, apparatus and user equipment.
Background technique
Face recognition technology identifies a person based on facial features by performing face detection, key point localization, feature extraction and comparison on an input image or video. Face detection locates the position and size of faces in a picture or video stream; key point localization then determines the key features of the facial organs; feature extraction describes and extracts the features of the detected face; and comparison matches the extracted facial image features against the features in a target library to produce a comparison result and identify the person. Dynamic face recognition is mainly the process of automatically detecting the faces in a dynamic video and identifying the person. A common detection algorithm is the cascaded face detection algorithm VJ-detector, which is broadly divided into four stages: extracting Haar features, creating feature maps, Adaboost iterative training, and a cascade classifier.
However, although the VJ-detector uses many Haar features, its performance declines significantly on the complex, changeable and multi-pose pictures of real life. In real life, lighting variation significantly changes the values of the dark and bright regions of the face, so the values extracted as Haar features are greatly affected; the texture features of a profile face differ greatly from those of a frontal face, so pose variation likewise has a huge impact on Haar feature extraction; similarly, the texture features of different expressions differ greatly, and exaggerated expressions also greatly affect the Haar feature extraction of the VJ detector, with a huge impact on the accuracy of face detection.
Summary of the invention
The object of the present invention is to provide a face detection method, apparatus and user equipment that process an image through a multistage cascaded convolutional neural network model, effectively improving the accuracy of face detection.
To achieve the above object, an embodiment of the present invention provides a face detection method, comprising:
obtaining a target image;
detecting the position of a face and facial feature points in the target image using a multistage cascaded convolutional neural network model to obtain face information.
Wherein each stage of the multistage cascaded convolutional neural network model includes a face detection loss function model, a candidate frame correction loss function distance model, and a facial feature point loss function model;
the step of detecting the position of a face and facial feature points in the target image using the multistage cascaded convolutional neural network model to obtain face information comprises:
obtaining, from a sample database, target sample data corresponding to the current convolutional neural network model;
predicting, according to the target sample data, the facial image frame of the target image input to the current convolutional neural network model;
determining, according to the face detection loss function model, the candidate frame correction loss function distance model and the facial feature point loss function model of the current convolutional neural network model, a target frame whose similarity to the facial image frame is greater than a first threshold;
obtaining the face information in the target image according to the target frame and the target image.
Wherein the face detection loss function model is Loss_i^det = -(y_i^det·log(p_i) + (1 - y_i^det)·log(1 - p_i)), where p_i is the probability that the i-th candidate frame is the facial image frame, y_i^det ∈ {0, 1} indicates whether the i-th candidate frame is a target candidate frame, and Loss_i^det is the calculated value of the face detection loss function model.
Wherein the candidate frame correction loss function distance model is Loss_i^box = ||y_i^box' - y_i^box||_2^2, where y_i^box' is the preset reference quantity of the i-th target candidate frame, y_i^box is the preset reference quantity of the facial image frame, and Loss_i^box is the calculated value of the candidate frame correction loss function distance model.
Wherein the facial feature point loss function model is Loss_i^landmark = ||y_i^landmark' - y_i^landmark||_2^2, where y_i^landmark' is the facial feature point position of the image framed by the i-th target candidate frame in the target image, y_i^landmark is the facial feature point position of the image framed by the facial image frame, and Loss_i^landmark is the calculated value of the facial feature point loss function model.
Wherein the step of determining, according to the face detection loss function model, the candidate frame correction loss function distance model and the facial feature point loss function model of the current convolutional neural network model, a target frame whose similarity to the facial image frame is greater than the first threshold comprises:
determining, based on the target sample data, the probability that a candidate frame of the current convolutional neural network model is the facial image frame;
substituting the probability into the face detection loss function model to obtain the value of y_i^det that minimizes Loss_i^det; if y_i^det = 0, the i-th candidate frame is not a target candidate frame, and if y_i^det = 1, the i-th candidate frame is a target candidate frame;
substituting the preset reference quantity of each target candidate frame into the candidate frame correction loss function distance model and the facial feature point position of each target candidate frame into the facial feature point loss function model, and determining as the target frame the target candidate frames for which Loss_i^box is less than a second threshold and Loss_i^landmark is less than a third threshold.
Wherein the multistage cascaded convolutional neural network model includes a first-stage convolutional neural network model, a second-stage convolutional neural network model and a third-stage convolutional neural network model; wherein
the second-stage convolutional neural network model has 1 more pooling layer and 1 more fully connected layer than the first-stage convolutional neural network model, and the third-stage convolutional neural network model has 1 more convolutional layer and 1 more pooling layer than the second-stage convolutional neural network model.
Wherein the step of obtaining the target image comprises:
scaling an image to be detected to obtain a first image, a second image and a third image of different sizes, the sizes of the first image, the second image and the third image increasing in that order; wherein
the first image is the target image of the first-stage convolutional neural network model, the second image is the target image of the second-stage convolutional neural network model, and the third image is the target image of the third-stage convolutional neural network model.
Wherein the step of obtaining, from the sample database, the target sample data corresponding to the current convolutional neural network model comprises:
collecting, using the candidate frames in the current convolutional neural network model, a preset number of first-class samples, second-class samples, third-class samples and fourth-class samples from the sample database as the target sample data; wherein
a first-class sample is a sample in which the ratio of the facial image region to the total image region is less than a first ratio, a second-class sample is a sample in which the ratio of the facial image region to the total image region is greater than a second ratio, a third-class sample is a sample in which the ratio of the facial image region to the total image region is greater than or equal to the first ratio and less than or equal to the second ratio, and a fourth-class sample is a sample that includes a facial image and facial feature points.
Wherein the method further comprises:
during training of the current convolutional neural network model based on the target sample data, sorting the samples by loss value in descending order after each iteration and using the first N samples as the input samples of the next iteration, until the loss value is less than a preset loss threshold.
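The training strategy above (sort the samples by loss value after each iteration and feed the N largest-loss samples into the next iteration) can be sketched as follows; the sample names, loss values and N are invented for the illustration and are not from the patent:

```python
import numpy as np

def hard_sample_mining(samples, losses, n_keep):
    """Sort samples by loss (descending) and keep the hardest n_keep
    as the input of the next training iteration."""
    order = np.argsort(losses)[::-1]           # indices, largest loss first
    return [samples[i] for i in order[:n_keep]]

# Toy illustration: four samples with per-sample loss values.
samples = ["s0", "s1", "s2", "s3"]
losses = np.array([0.2, 0.9, 0.5, 0.1])
print(hard_sample_mining(samples, losses, 2))  # hardest two: ['s1', 's2']
```

In a real training loop this selection would be repeated until the loss falls below the preset loss threshold, as the text describes.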
To achieve the above object, an embodiment of the present invention provides a face detection apparatus, comprising:
an obtaining module, configured to obtain a target image;
a first processing module, configured to detect the position of a face and facial feature points in the target image using a multistage cascaded convolutional neural network model to obtain face information.
To achieve the above object, an embodiment of the present invention provides a user equipment including a transceiver, a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the face detection method described above.
To achieve the above object, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the face detection method described above.
The beneficial effects of the above technical solutions of the present invention are as follows:
After obtaining the target image to be processed, the face detection method of the embodiment of the present invention detects the position of the face and the facial feature points in the target image using a multistage cascaded convolutional neural network model, progressing stage by stage from coarse to fine to obtain face information with both precision and efficiency, thereby improving detection accuracy.
Detailed description of the invention
Fig. 1 is one of the flow chart of method for detecting human face of the embodiment of the present invention;
Fig. 2 is the two of the flow chart of the method for detecting human face of the embodiment of the present invention;
Fig. 3 is an application schematic diagram of the three-stage cascaded convolutional neural network model in the face detection method of the embodiment of the present invention;
Fig. 4 is the structural schematic diagram of the human face detection device of the embodiment of the present invention;
Fig. 5 is the structural schematic diagram of the user equipment of the embodiment of the present invention.
Specific embodiment
To make the technical problem to be solved by the present invention, the technical solutions and the advantages clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
Aiming at the problem that existing face detection is easily affected by many factors that reduce detection accuracy, the present invention provides a face detection method that processes an image through a multistage cascaded convolutional neural network model, effectively improving the accuracy of face detection.
As shown in Fig. 1, a face detection method of an embodiment of the present invention comprises:
Step 101: obtaining a target image;
Step 102: detecting the position of a face and facial feature points in the target image using a multistage cascaded convolutional neural network model to obtain face information.
Through the above steps, after obtaining the target image to be processed, the face detection method of the embodiment of the present invention detects the position of the face and the facial feature points in the target image using a multistage cascaded convolutional neural network model, progressing stage by stage from coarse to fine to obtain face information with both precision and efficiency, thereby improving detection accuracy.
In this embodiment, a convolutional neural network model mainly consists of convolutional layers, pooling layers, activation functions, fully connected layers and loss functions, among which the convolutional layers and fully connected layers, which hold the weight parameters, play the decisive role. It should be understood that, unlike the sparse connectivity and weight sharing of a convolutional layer, each neuron of a fully connected layer is connected to all neurons of the previous layer. However, both fully connected layers and convolutional layers mainly consist of a weight parameter w and an offset parameter b: for a convolutional layer or a fully connected layer with input x, the output is y = wx + b. A fully connected layer may well be regarded as a convolutional layer without sliding ability: a fully connected layer whose input size is (6, 6) is equivalent to a convolutional layer whose convolution kernel (Kernel) is 6, with no boundary padding (pad), so that the kernel covers the entire input and never slides (Stride). A fully connected layer can therefore be converted into a convolutional layer by rearranging its parameters.
In CaffeNet (Convolutional Architecture for Fast Feature Embedding Net, a convolutional neural network framework), the four dimensions of a weight matrix are the number of filters, the number of feature maps, and the length and width of the convolution kernel. For CaffeNet, the output of the last convolutional layer conv5, after the pooling layer pool5, is (1, 256, 6, 6), i.e. 256 feature maps, each of size 6 × 6. The fully connected layer that follows, with output (1, 4096, 1, 1), can be converted as follows: design 4096 groups of filters, each group with 256 filter kernels, each kernel of size 6 × 6, as a convolutional layer Conv6 that replaces the original fully connected layer Fc6. Passing the 6 × 6 feature maps through Conv6 yields a feature map of size (1, 4096, 1, 1). Since the weight parameters w and offset parameters b of Conv6 and Fc6 are identical, the outputs are the same. In the same way, the fully connected layers Fc7 and Fc8 are converted into convolutional layers Conv7 and Conv8. A picture of size 381 × 451 then produces a feature map of size 5 × 8 after CaffeNet; after softmax classification of the feature map, the position with the highest probability for the cat class (label = 281 in ImageNet) corresponds to the framed region. In this way, when performing object detection, a picture of arbitrary size needs only one feed-forward propagation to obtain the feature maps of all positions. Softmax classification of the feature maps yields a score map for each class, and the position in the original image corresponding to the highest-scoring position of the current class's score map is the position of an object of that class.
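The Fc6-to-Conv6 conversion described above can be checked numerically: the weights of a fully connected layer over 6 × 6 feature maps, reshaped into convolution kernels of the same 6 × 6 size, give an identical output. A minimal NumPy sketch, with 4 output units and 3 channels standing in for CaffeNet's 4096 and 256:

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, K = 3, 6, 4                      # channels, spatial size, output units
x = rng.standard_normal((C, H, H))     # pool5-style feature maps
w = rng.standard_normal((K, C * H * H))
b = rng.standard_normal(K)

# Fully connected layer: y = w x + b on the flattened feature maps.
y_fc = w @ x.ravel() + b

# The same weights viewed as K convolution kernels of shape (C, 6, 6);
# a 6x6 kernel on a 6x6 input with no padding is a single dot product.
kernels = w.reshape(K, C, H, H)
y_conv = np.array([(k * x).sum() for k in kernels]) + b

assert np.allclose(y_fc, y_conv)       # identical outputs, as the text states
print("FC and conv outputs match")
```

Because the kernel covers the whole input, the "convolution" reduces to one dot product per filter, which is exactly the fully connected computation.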
Preferably, in the embodiment of the present invention, the multistage cascaded convolutional neural network model includes a first-stage convolutional neural network model, a second-stage convolutional neural network model and a third-stage convolutional neural network model; wherein
the second-stage convolutional neural network model has 1 more pooling layer and 1 more fully connected layer than the first-stage convolutional neural network model, and the third-stage convolutional neural network model has 1 more convolutional layer and 1 more pooling layer than the second-stage convolutional neural network model.
Here, by increasing the number of convolutional layers and convolution kernels stage by stage, the multistage cascaded convolutional neural network model describes the fine details of the target image in ever greater detail, further improving the precision and efficiency of detection.
For example, the first-stage convolutional neural network model 12net is shown in Table 1 below:
Table 1
The second-stage convolutional neural network model 24net is shown in Table 2 below:
Table 2
The third-stage convolutional neural network model 48net is shown in Table 3 below:
Table 3
Thus, in view of the different compositions of the stages of the convolutional neural network model in this embodiment, the image to be detected is scaled for each stage of the convolutional neural network model to obtain an input image of the size that stage expects. Step 101 comprises:
scaling an image to be detected to obtain a first image, a second image and a third image of different sizes, the sizes of the first image, the second image and the third image increasing in that order; wherein
the first image is the target image of the first-stage convolutional neural network model, the second image is the target image of the second-stage convolutional neural network model, and the third image is the target image of the third-stage convolutional neural network model.
Here, the image to be detected may be an image currently captured by a camera, so that face detection is performed immediately after shooting, or an image selected by the user in an album, so that face detection is performed on the image the user selected. The image to be detected may also undergo preprocessing such as mean subtraction to improve the quality of the image input to each stage of the convolutional neural network model.
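The scaling of the image to be detected into three inputs of increasing size can be sketched as below. The 12 × 12, 24 × 24 and 48 × 48 sizes are assumptions suggested by the model names 12net, 24net and 48net, and the nearest-neighbour resize and mean subtraction are illustrative simplifications:

```python
import numpy as np

def resize_nn(img, size):
    """Nearest-neighbour resize of an (H, W) image to (size, size)."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows[:, None], cols]

def build_pyramid(img, sizes=(12, 24, 48)):
    """Scaled copies for the first-, second- and third-stage models,
    mean-subtracted as a simple preprocessing step."""
    return [resize_nn(img, s) - img.mean() for s in sizes]

img = np.arange(96 * 96, dtype=float).reshape(96, 96)
first, second, third = build_pyramid(img)
print(first.shape, second.shape, third.shape)  # (12, 12) (24, 24) (48, 48)
```

A production implementation would use proper interpolation (e.g. bilinear) and per-channel means, but the structure, one scaled input per cascade stage, is the same.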
Afterwards, each stage of the convolutional neural network model processes its corresponding image. Specifically, each stage of the multistage cascaded convolutional neural network model includes a face detection loss function model, a candidate frame correction loss function distance model and a facial feature point loss function model.
As shown in Fig. 2, step 102 comprises:
Step 201: obtaining, from a sample database, target sample data corresponding to the current convolutional neural network model;
Step 202: predicting, according to the target sample data, the facial image frame of the target image input to the current convolutional neural network model;
Step 203: determining, according to the face detection loss function model, the candidate frame correction loss function distance model and the facial feature point loss function model of the current convolutional neural network model, a target frame whose similarity to the facial image frame is greater than a first threshold;
Step 204: obtaining the face information in the target image according to the target frame and the target image.
Here, each stage of the convolutional neural network model includes a face detection loss function model, a candidate frame correction loss function distance model and a facial feature point loss function model. In the stage-by-stage processing, target sample data corresponding to the current convolutional neural network model is first obtained from the sample database to improve the validity of the sample data; the facial image frame of the target image input to the current convolutional neural network model is then predicted using the target sample data; the face detection loss function model, candidate frame correction loss function distance model and facial feature point loss function model of the current convolutional neural network model then determine the target frame whose similarity to the predicted facial image frame is greater than the first threshold; and finally the target frame and the target image are combined to obtain the face information in the target image.
Taking the above three-stage cascaded convolutional neural network model comprising the first-stage, second-stage and third-stage convolutional neural network models as an example, as shown in Fig. 3:
The first-stage convolutional neural network model 12net obtains the corresponding first target sample data A1 from the sample database and uses A1 to predict the first facial image frame of the first image input to the first-stage convolutional neural network model. The face detection loss function model, candidate frame correction loss function distance model and facial feature point loss function model of the first-stage model then obtain the first target frame whose similarity to the first facial image frame is greater than the first threshold. Here, the first target frame is obtained by non-maximum suppression: based on the first facial image frame, the highly overlapping frames among the randomly generated candidate frames used for obtaining face regions are merged. Finally, framing the first image with the first target frame yields an image from which a large number of non-face regions have been screened out, giving the first face information.
The second-stage convolutional neural network model 24net obtains the corresponding second target sample data A2 from the sample database and uses A2 to predict the second facial image frame of the second image input to the second-stage convolutional neural network model. The face detection loss function model, candidate frame correction loss function distance model and facial feature point loss function model of the second-stage model then obtain the second target frame whose similarity to the second facial image frame is greater than the first threshold. Likewise through non-maximum suppression, because the second-stage model has a larger receptive field, more network layers and a more complex network structure than the first-stage model, the resulting second target frame can, on the basis of its candidate frame (the first target frame), filter out more non-face regions than the first image, and the second face information extracted from the second image is optimized relative to the first face information.
The third-stage convolutional neural network model 48net obtains the corresponding third target sample data A3 from the sample database and uses A3 to predict the third facial image frame of the third image input to the third-stage convolutional neural network model. The face detection loss function model, candidate frame correction loss function distance model and facial feature point loss function model of the third-stage model then obtain the third target frame whose similarity to the third facial image frame is greater than the first threshold. After non-maximum suppression in the third-stage model, the third target frame, on the basis of its candidate frame (the second target frame), further filters out more non-face regions than the second image, and the third face information extracted from the third image is further optimized relative to the second face information. In this way the non-face regions in the image to be detected are screened out step by step, and finer face information is finally obtained.
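The non-maximum suppression used at each stage to merge highly overlapping candidate frames can be sketched as follows. Boxes are (x1, y1, x2, y2) with a confidence score; the 0.5 overlap threshold and the example boxes are illustrative, not values from the patent:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it by more
    than thresh, and repeat on the remainder."""
    order = [int(i) for i in np.argsort(scores)[::-1]]
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the second box overlaps the first and is merged away
```

Each stage of the cascade would apply this to its candidate frames before handing the surviving frames to the next stage.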
More specifically, the face detection loss function model is Loss_i^det = -(y_i^det·log(p_i) + (1 - y_i^det)·log(1 - p_i)), where p_i is the probability that the i-th candidate frame is the facial image frame, y_i^det ∈ {0, 1} indicates whether the i-th candidate frame is a target candidate frame, and Loss_i^det is the calculated value of the face detection loss function model.
Here, after the probability p_i that the i-th candidate frame is the facial image frame has been determined, the face detection loss function model Loss_i^det = -(y_i^det·log(p_i) + (1 - y_i^det)·log(1 - p_i)) yields the value of y_i^det that minimizes Loss_i^det, identifying whether the i-th candidate frame is a target candidate frame, so that the target candidate frames containing a face are screened out of a large number of candidate frames.
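The face detection loss above is the standard binary cross-entropy for one candidate frame; a small sketch with an illustrative probability:

```python
import math

def det_loss(y_det, p):
    """Loss_i^det = -(y·log(p) + (1 - y)·log(1 - p)) for one candidate frame."""
    return -(y_det * math.log(p) + (1 - y_det) * math.log(1 - p))

# A candidate frame judged to contain a face with probability 0.9:
# labelling it a target candidate frame (y = 1) gives the smaller loss.
assert det_loss(1, 0.9) < det_loss(0, 0.9)
print(round(det_loss(1, 0.9), 4))  # 0.1054
```

Choosing the y_i^det that minimizes this loss is what classifies the frame as a target candidate frame (y = 1) or not (y = 0).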
Wherein the candidate frame correction loss function distance model is Loss_i^box = ||y_i^box' - y_i^box||_2^2, where y_i^box' is the preset reference quantity of the i-th target candidate frame, y_i^box is the preset reference quantity of the facial image frame, and Loss_i^box is the calculated value of the candidate frame correction loss function distance model.
Here, for each determined target candidate frame, substituting its preset reference quantities and the preset reference quantities of the facial image frame into the formula Loss_i^box = ||y_i^box' - y_i^box||_2^2 yields the candidate frame correction loss function distance model calculated value Loss_i^box of the i-th target candidate frame. In this embodiment the preferred preset reference quantities are the top-left corner coordinates X and Y of the frame and the length L and width W of the frame. Correspondingly, one calculated value Loss_i^box is obtained for each of the 4 preset reference quantities.
Wherein the facial feature point loss function model is Loss_i^landmark = ||y_i^landmark' - y_i^landmark||_2^2, where y_i^landmark' is the facial feature point position of the image framed by the i-th target candidate frame in the target image, y_i^landmark is the facial feature point position of the image framed by the facial image frame, and Loss_i^landmark is the calculated value of the facial feature point loss function model.
Here, substituting the facial feature point position y_i^landmark' of the image framed by the i-th determined target candidate frame and the facial feature point position y_i^landmark of the image framed by the facial image frame into the facial feature point loss function model Loss_i^landmark = ||y_i^landmark' - y_i^landmark||_2^2 yields the facial feature point loss function model calculated value Loss_i^landmark of the i-th target candidate frame.
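Both the candidate frame correction loss and the facial feature point loss above are squared Euclidean distances between the target candidate frame's quantities and the facial image frame's quantities. A sketch with invented values, using (X, Y, L, W) for the frame reference quantities and 5 (x, y) feature points:

```python
import numpy as np

def l2_loss(pred, ref):
    """Squared Euclidean distance ||pred - ref||_2^2, used both for
    Loss_i^box (frame reference quantities) and Loss_i^landmark
    (facial feature point positions)."""
    d = np.asarray(pred, dtype=float) - np.asarray(ref, dtype=float)
    return float((d * d).sum())

# Candidate frame (X, Y, L, W) vs. the predicted facial image frame:
loss_box = l2_loss([10, 12, 50, 48], [11, 12, 52, 48])
# 5 facial feature points (x, y) of the candidate vs. those of the frame:
loss_lmk = l2_loss(np.zeros((5, 2)), np.full((5, 2), 0.1))
print(loss_box, round(loss_lmk, 2))  # 5.0 0.1
```

The smaller these distances, the closer the target candidate frame is to the predicted facial image frame, which is what the second and third thresholds test.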
Therefore, step 203 comprises:
determining, based on the target sample data, the probability that a candidate frame of the current convolutional neural network model is the facial image frame;
substituting the probability into the face detection loss function model to obtain the value of y_i^det that minimizes Loss_i^det; if y_i^det = 0, the i-th candidate frame is not a target candidate frame, and if y_i^det = 1, the i-th candidate frame is a target candidate frame;
substituting the preset reference quantity of each target candidate frame into the candidate frame correction loss function distance model and the facial feature point position of each target candidate frame into the facial feature point loss function model, and determining as the target frame the target candidate frames for which Loss_i^box is less than a second threshold and Loss_i^landmark is less than a third threshold.
In this way, from the obtained target sample data, the probability p_i that the i-th candidate frame is the facial image frame is first determined and substituted into the face detection loss function model Loss_i^det = -(y_i^det·log(p_i) + (1 - y_i^det)·log(1 - p_i)) to obtain the value of y_i^det that minimizes Loss_i^det, revealing whether the i-th candidate frame is a target candidate frame and screening the target candidate frames containing a face out of the large number of candidate frames. Then the resulting candidate frame correction loss function distance model calculated value Loss_i^box is compared with the corresponding second threshold (a preset distance threshold), and the resulting facial feature point calculated value Loss_i^landmark with the corresponding third threshold (a preset facial feature point threshold), and the target frame is further screened from the initially selected target candidate frames as those for which Loss_i^box is less than the second threshold and Loss_i^landmark is less than the third threshold. To select the optimal target frame, the preset distance threshold and the preset facial feature point threshold approach zero.
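The two-step screening above (first the detection label, then the two loss thresholds) can be sketched as follows; the candidate records and threshold values are invented for the illustration:

```python
def select_target_frames(candidates, thresh_box, thresh_lmk):
    """Keep the target candidate frames (y_det == 1) whose Loss_box and
    Loss_landmark fall below the second and third thresholds."""
    return [c["id"] for c in candidates
            if c["y_det"] == 1
            and c["loss_box"] < thresh_box
            and c["loss_lmk"] < thresh_lmk]

candidates = [
    {"id": 0, "y_det": 1, "loss_box": 0.02, "loss_lmk": 0.01},  # kept
    {"id": 1, "y_det": 1, "loss_box": 0.40, "loss_lmk": 0.01},  # box loss too high
    {"id": 2, "y_det": 0, "loss_box": 0.01, "loss_lmk": 0.01},  # not a target candidate
]
print(select_target_frames(candidates, thresh_box=0.1, thresh_lmk=0.1))  # [0]
```

Driving `thresh_box` and `thresh_lmk` toward zero, as the text suggests, keeps only the candidate frames closest to the predicted facial image frame.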
In addition, on the basis of the above embodiments, in the embodiment of the present invention, step 201 includes:
Collecting, using the candidate frames in the current convolutional neural network model, a preset quantity of first-class samples, second-class samples, third-class samples and fourth-class samples from the sample database as the target sample data; wherein,
The first-class samples are samples in which the ratio of the facial image region to the total image region is less than a first ratio, the second-class samples are samples in which the ratio of the facial image region to the total image region is greater than a second ratio, the third-class samples are samples in which the ratio of the facial image region to the total image region is greater than or equal to the first ratio and less than or equal to the second ratio, and the fourth-class samples are samples containing a facial image and facial feature points.
It is assumed that the first ratio is 0.3 and the second ratio is 0.7. The sample database is then partitioned by the formula IoU = (S_box1 ∩ S_box2) / (S_box1 ∪ S_box2): sample data with IoU less than 0.3 are classified as negative samples (first-class samples), sample data with IoU greater than 0.7 are classified as positive samples (second-class samples), sample data with IoU ∈ [0.3, 0.7] are classified as neutral samples (third-class samples), and samples containing a face region and 5 feature point positions are feature point localization samples (fourth-class samples). Here S_box1 denotes the face region data in the sample and S_box2 denotes the non-face region data in the sample. Preferably, the face detection loss function model is trained with positive and negative samples, the candidate frame correction loss function distance model is trained with positive and neutral samples, and the facial feature point loss function model is trained only with feature point localization samples. In this way, each stage of the convolutional neural network model uses its candidate frames to collect a preset quantity of first-class, second-class, third-class and fourth-class samples from the sample database as the target sample data.
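The IoU partition above can be sketched as follows, assuming boxes are (x1, y1, x2, y2) tuples (the helper names are ours, not from the patent):

```python
def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def classify_sample(candidate, ground_truth, lo=0.3, hi=0.7):
    # Negative below lo, positive above hi, neutral in [lo, hi],
    # mirroring the 0.3 / 0.7 thresholds in the embodiment.
    v = iou(candidate, ground_truth)
    if v < lo:
        return "negative"
    if v > hi:
        return "positive"
    return "neutral"
```

A candidate that exactly covers the ground truth is positive, a disjoint one is negative, and a half-overlapping one falls into the neutral class.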
Taking the above three-stage cascaded convolutional neural network model as an example: the first-stage convolutional neural network model collects positive, negative and neutral samples from a sample database (such as the Wider Face database) using randomly generated candidate frames, and crops face regions from a sample database (such as the CelebA database) as feature point localization samples. The candidate frames of the second-stage convolutional neural network model are the first target frames of the first-stage convolutional neural network model; positive, negative and neutral samples are collected from the sample database (such as the Wider Face database) through the first target frames, and face regions are cropped from the sample database (such as the CelebA database) as feature point localization samples. The candidate frames of the third-stage convolutional neural network model are the second target frames of the second-stage convolutional neural network model; positive, negative and neutral samples are collected from the sample database (such as the Wider Face database) through the second target frames, and face regions are cropped from the sample database (such as the CelebA database) as feature point localization samples.
Any one of the face detection loss function model, the candidate frame correction loss function distance model and the facial feature point loss function model may be trained alone, with the other models not used. For example, when only negative samples are used to train the face detection loss function model, the weights of the other models are set to 0 in the formula Loss = α_det·loss_det(θ) + α_box·loss_box(θ) + α_landmark·loss_landmark(θ). After training, in the first-stage convolutional neural network model and the second-stage convolutional neural network model: α_det = 1, α_box = 0.5, α_landmark = 0.5; in the third-stage convolutional neural network model: α_det = 1, α_box = 0.5, α_landmark = 1.
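The weighted combination of the three loss terms, with the per-stage weights quoted above, can be sketched as (function and dictionary names are illustrative):

```python
def total_loss(loss_det, loss_box, loss_landmark, alphas):
    # Loss = a_det*loss_det + a_box*loss_box + a_landmark*loss_landmark.
    a_det, a_box, a_landmark = alphas
    return a_det * loss_det + a_box * loss_box + a_landmark * loss_landmark

# Per-stage (alpha_det, alpha_box, alpha_landmark) weights given
# in the embodiment: stages 1-2 use (1, 0.5, 0.5), stage 3 (1, 0.5, 1).
STAGE_WEIGHTS = {
    "stage1": (1.0, 0.5, 0.5),
    "stage2": (1.0, 0.5, 0.5),
    "stage3": (1.0, 0.5, 1.0),
}
```

Setting a weight to 0 disables the corresponding model's contribution, which is how a single loss can be trained in isolation.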
It should also be appreciated that a large number of samples determines the accuracy and convergence of a deep network. Training samples include easy examples and hard examples. An easy example is a sample that is easily recognized and contributes little to training the model; a hard example is an ambiguous sample that is easily misclassified. A training sample set usually contains many easy samples and fewer hard samples, and selecting hard samples during training makes training more effective. Hard example mining usually finds the hard samples among the training samples, so as to improve the network's ability to discriminate the target. Therefore, in the embodiment of the present invention, the method further includes:
During the training of the current convolutional neural network model based on the target sample data, sorting the samples by loss value from large to small after each iteration, and using the first N samples as the input samples for the next iteration, until the loss value is less than a preset loss threshold.
The loss value Loss can be calculated by the formula Loss = α_det·loss_det(θ) + α_box·loss_box(θ) + α_landmark·loss_landmark(θ). During the training of the current convolutional neural network model, the loss values after each iteration are calculated, and the N samples with the largest loss values can be chosen as the input samples for the next iteration. For example, in one batch of an SGD (Stochastic Gradient Descent) iteration with a batch size of 128, all images in the batch are forward-propagated to obtain their corresponding loss values, the losses are sorted from large to small, and the samples with the largest 70% of losses are selected for back-propagation. With this online hard example selection, the face detection of the cascaded convolutional neural network model converges faster, and its precision is further improved.
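The online hard example selection described above — forward-propagate a batch, sort per-sample losses descending, back-propagate only the largest fraction — can be sketched as:

```python
def select_hard_examples(losses, keep_ratio=0.7):
    # Sort per-sample losses descending and keep the indices of the
    # top keep_ratio fraction; only these are back-propagated.
    order = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    n_keep = max(1, int(len(losses) * keep_ratio))
    return order[:n_keep]
```

For a batch of 128 this keeps the 89 hardest samples, matching the "largest 70% of losses" rule in the embodiment.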
In conclusion the method for detecting human face of the embodiment of the present invention will use after getting target image to be processed Multistage concatenated convolutional neural network model detects the position of face in the target image and characteristic point, in network stepwise By slightly to the progressive precision and efficiency realizing face information and obtaining of essence, promoting the accuracy of detection in model.
As shown in Fig. 4, a face detection device 400 of an embodiment of the present invention comprises:
An acquisition module 410, for acquiring a target image;
A first processing module 420, for detecting the positions of faces and feature points in the target image using a multistage cascaded convolutional neural network model to obtain face information.
Wherein, each stage of convolutional neural network model in the multistage cascaded convolutional neural network model includes a face detection loss function model, a candidate frame correction loss function distance model and a facial feature point loss function model;
The first processing module includes:
An acquisition submodule, for acquiring the target sample data corresponding to the current convolutional neural network model from a sample database;
A prediction submodule, for predicting, according to the target sample data, the facial image frame of the target image input to the current convolutional neural network model;
A determining submodule, for determining, according to the face detection loss function model, the candidate frame correction loss function distance model and the facial feature point loss function model of the current convolutional neural network model, target frames whose similarity with the facial image frame is greater than a first threshold;
A processing submodule, for obtaining the face information in the target image according to the target frames and the target image.
Wherein, the face detection loss function model is Loss_i^det = -(y_i^det·log(p_i) + (1 - y_i^det)·log(1 - p_i)), where p_i is the probability that the i-th candidate frame is the facial image frame, y_i^det ∈ {0, 1} indicates whether the i-th candidate frame is a target candidate frame, and Loss_i^det is the face detection loss function model value.
Wherein, the candidate frame correction loss function distance model is Loss_i^box = ||y_i^box' - y_i^box||_2^2, where y_i^box' is the preset reference amount of the i-th target candidate frame, y_i^box is the preset reference amount of the facial image frame, and Loss_i^box is the candidate frame correction loss function distance model value.
Wherein, the facial feature point loss function model is Loss_i^landmark = ||y_i^landmark' - y_i^landmark||_2^2, where y_i^landmark' is the facial feature point positions of the image framed by the i-th target candidate frame in the target image, y_i^landmark is the facial feature point positions of the image framed by the facial image frame, and Loss_i^landmark is the facial feature point loss function model value.
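The two distance models above measure how far a candidate's reference quantities and feature point positions lie from those of the facial image frame. A squared Euclidean distance is one plausible form (the specific norm is our assumption, not confirmed by the text):

```python
def l2_distance_loss(pred, target):
    # Squared Euclidean distance between predicted and reference
    # vectors; an assumed form of Loss_i^box and Loss_i^landmark.
    return sum((p - t) ** 2 for p, t in zip(pred, target))
```

Under this reading, screening against the second and third thresholds simply keeps candidates whose predicted vectors lie close enough to the reference vectors.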
Wherein, the determining submodule includes:
A first determination unit, for determining, based on the target sample data, the probability that a candidate frame of the current convolutional neural network model is the facial image frame;
A processing unit, for substituting the probability into the face detection loss function model to obtain the value of y_i^det that minimizes Loss_i^det; if y_i^det = 0, the i-th candidate frame is not a target candidate frame; if y_i^det = 1, the i-th candidate frame is a target candidate frame;
A second determination unit, for substituting the preset reference amount of each target candidate frame into the candidate frame correction loss function distance model and the facial feature point positions of each target candidate frame into the facial feature point loss function model, and determining as target frames those target candidate frames for which Loss_i^box is less than a second threshold and Loss_i^landmark is less than a third threshold.
Wherein, the multistage cascaded convolutional neural network model includes: a first-stage convolutional neural network model, a second-stage convolutional neural network model and a third-stage convolutional neural network model; wherein,
The second-stage convolutional neural network model has 1 more pooling layer and 1 more fully connected layer than the first-stage convolutional neural network model, and the third-stage convolutional neural network model has 1 more convolutional layer and 1 more pooling layer than the second-stage convolutional neural network model.
Wherein, the acquisition module is further used for:
Scaling an image to be detected to obtain a first image, a second image and a third image of different sizes, the sizes of the first image, the second image and the third image increasing in turn; wherein,
The first image is the target image of the first-stage convolutional neural network model, the second image is the target image of the second-stage convolutional neural network model, and the third image is the target image of the third-stage convolutional neural network model.
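The three increasingly large stage inputs can be produced by simple scaling. The concrete scale factors below are illustrative only — the text requires just that the sizes increase in turn:

```python
def stage_input_sizes(width, height, scales=(0.25, 0.5, 1.0)):
    # Produce three increasingly large (w, h) sizes for the first-,
    # second- and third-stage networks from one image to be detected.
    # The scale factors are an assumption, not specified by the patent.
    return [(int(width * s), int(height * s)) for s in scales]
```

For a 400x300 input this yields 100x75, 200x150 and 400x300 images for the three stages respectively.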
Wherein, the acquisition submodule is further used for:
Collecting, using the candidate frames in the current convolutional neural network model, a preset quantity of first-class samples, second-class samples, third-class samples and fourth-class samples from the sample database as the target sample data; wherein,
The first-class samples are samples in which the ratio of the facial image region to the total image region is less than a first ratio, the second-class samples are samples in which the ratio of the facial image region to the total image region is greater than a second ratio, the third-class samples are samples in which the ratio of the facial image region to the total image region is greater than or equal to the first ratio and less than or equal to the second ratio, and the fourth-class samples are samples containing a facial image and facial feature points.
Wherein, described device further include:
A second processing module, for, during the training of the current convolutional neural network model based on the target sample data, sorting the samples by loss value from large to small after each iteration, and using the first N samples as the input samples for the next iteration, until the loss value is less than a preset loss threshold.
The face detection device of this embodiment, after obtaining the target image to be processed, detects the positions of faces and feature points in the target image using a multistage cascaded convolutional neural network model; the stage-by-stage, coarse-to-fine progression of the network models achieves the acquisition of face information and improves detection precision.
It should be noted that this device is a device applying the above face detection method; the implementations of the above method embodiments are applicable to this device and can achieve the same technical effects.
A user equipment of an embodiment of the present invention, as shown in Fig. 5, includes a transceiver 510, a memory 520, a processor 500, and a computer program stored on the memory 520 and executable on the processor 500; the processor 500 implements the above face detection method when executing the computer program.
The transceiver 510 is used for sending and receiving data under the control of the processor 500.
In Fig. 5, the bus architecture may include any number of interconnected buses and bridges, specifically linking together various circuits of one or more processors represented by the processor 500 and memories represented by the memory 520. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators and power management circuits, which are all well known in the art and therefore not further described herein. The bus interface provides an interface. The transceiver 510 may be a plurality of elements, i.e., include a transmitter and a receiver, providing units for communicating with various other devices over a transmission medium. For different user equipments, a user interface 530 may also be an interface for connecting required external devices, the connected devices including but not limited to a keypad, a display, a loudspeaker, a microphone, a joystick, etc.
The processor 500 is responsible for managing the bus architecture and common processing, and the memory 520 can store the data used by the processor 500 when performing operations.
A computer readable storage medium of an embodiment of the present invention stores a computer program thereon; when the computer program is executed by a processor, the steps in the face detection method described above are implemented, and the same technical effects can be achieved, which, to avoid repetition, are not described again here. The computer readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
It should further be noted that the user equipment described in this description includes but is not limited to a smart phone, a tablet computer, etc., and many of the functional components described are referred to as modules in order to specifically emphasize the independence of their implementation.
In the embodiment of the present invention, a module may be implemented in software so as to be executed by various types of processors. For example, an identified executable code module may include one or more physical or logical blocks of computer instructions, which may, for example, be constructed as an object, a procedure or a function. Nevertheless, the executable code of an identified module need not be physically located together, but may include different instructions stored in different locations which, when logically combined, constitute the module and achieve the stated purpose of the module.
In fact, an executable code module may be a single instruction or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Likewise, operational data may be identified within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations (including over different storage devices), and may exist, at least partially, merely as electronic signals on a system or network.
When a module can be implemented in software, considering the level of existing hardware technology, those skilled in the art may, without regard to cost, build corresponding hardware circuits to realize the corresponding functions; such hardware circuits include conventional very-large-scale integration (VLSI) circuits or gate arrays and existing semiconductors such as logic chips and transistors, or other discrete elements. A module may also be implemented with programmable hardware devices, such as field programmable gate arrays, programmable logic arrays, programmable logic devices, etc.
The above exemplary embodiments are described with reference to the accompanying drawings. Many different forms and embodiments are feasible without departing from the spirit and teaching of the present invention; therefore, the present invention should not be construed as limited to the exemplary embodiments proposed herein. Rather, these exemplary embodiments are provided so that the present invention will be thorough and complete, and will convey the scope of the invention to those skilled in the art. In the drawings, component sizes and relative sizes may be exaggerated for clarity. The terms used herein are based only on the purpose of describing particular example embodiments and are not intended to be limiting. As used herein, unless the context clearly indicates otherwise, the singular forms "a", "an" and "the" are intended to include the plural forms as well. The terms "comprising" and/or "including", when used in this specification, indicate the presence of the stated features, integers, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. Unless otherwise indicated, a stated value range includes the upper and lower bounds of the range and any subrange therebetween.
The above are preferred embodiments of the present invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principles of the present invention, and these improvements and modifications should also be regarded as within the protection scope of the present invention.

Claims (13)

1. A face detection method, characterized by comprising:
Acquiring a target image;
Detecting the positions of faces and feature points in the target image using a multistage cascaded convolutional neural network model to obtain face information.
2. The face detection method according to claim 1, characterized in that each stage of convolutional neural network model in the multistage cascaded convolutional neural network model includes a face detection loss function model, a candidate frame correction loss function distance model and a facial feature point loss function model;
The step of detecting the positions of faces and feature points in the target image using the multistage cascaded convolutional neural network model to obtain face information comprises:
Acquiring the target sample data corresponding to the current convolutional neural network model from a sample database;
Predicting, according to the target sample data, the facial image frame of the target image input to the current convolutional neural network model;
Determining, according to the face detection loss function model, the candidate frame correction loss function distance model and the facial feature point loss function model of the current convolutional neural network model, target frames whose similarity with the facial image frame is greater than a first threshold;
Obtaining the face information in the target image according to the target frames and the target image.
3. The face detection method according to claim 2, characterized in that the face detection loss function model is Loss_i^det = -(y_i^det·log(p_i) + (1 - y_i^det)·log(1 - p_i)), where p_i is the probability that the i-th candidate frame is the facial image frame, y_i^det ∈ {0, 1} indicates whether the i-th candidate frame is a target candidate frame, and Loss_i^det is the face detection loss function model value.
4. The face detection method according to claim 3, characterized in that the candidate frame correction loss function distance model is Loss_i^box = ||y_i^box' - y_i^box||_2^2, where y_i^box' is the preset reference amount of the i-th target candidate frame, y_i^box is the preset reference amount of the facial image frame, and Loss_i^box is the candidate frame correction loss function distance model value.
5. The face detection method according to claim 4, characterized in that the facial feature point loss function model is Loss_i^landmark = ||y_i^landmark' - y_i^landmark||_2^2, where y_i^landmark' is the facial feature point positions of the image framed by the i-th target candidate frame in the target image, y_i^landmark is the facial feature point positions of the image framed by the facial image frame, and Loss_i^landmark is the facial feature point loss function model value.
6. The face detection method according to claim 5, characterized in that the step of determining, according to the face detection loss function model, the candidate frame correction loss function distance model and the facial feature point loss function model of the current convolutional neural network model, target frames whose similarity with the facial image frame is greater than the first threshold comprises:
Determining, based on the target sample data, the probability that a candidate frame of the current convolutional neural network model is the facial image frame;
Substituting the probability into the face detection loss function model to obtain the value of y_i^det that minimizes Loss_i^det; if y_i^det = 0, the i-th candidate frame is not a target candidate frame; if y_i^det = 1, the i-th candidate frame is a target candidate frame;
Substituting the preset reference amount of each target candidate frame into the candidate frame correction loss function distance model and the facial feature point positions of each target candidate frame into the facial feature point loss function model, and determining as target frames those target candidate frames for which Loss_i^box is less than a second threshold and Loss_i^landmark is less than a third threshold.
7. The face detection method according to claim 2, characterized in that the multistage cascaded convolutional neural network model includes: a first-stage convolutional neural network model, a second-stage convolutional neural network model and a third-stage convolutional neural network model; wherein,
The second-stage convolutional neural network model has 1 more pooling layer and 1 more fully connected layer than the first-stage convolutional neural network model, and the third-stage convolutional neural network model has 1 more convolutional layer and 1 more pooling layer than the second-stage convolutional neural network model.
8. The face detection method according to claim 7, characterized in that the step of acquiring the target image comprises:
Scaling an image to be detected to obtain a first image, a second image and a third image of different sizes, the sizes of the first image, the second image and the third image increasing in turn; wherein,
The first image is the target image of the first-stage convolutional neural network model, the second image is the target image of the second-stage convolutional neural network model, and the third image is the target image of the third-stage convolutional neural network model.
9. The face detection method according to claim 2, characterized in that the step of acquiring the target sample data corresponding to the current convolutional neural network model from the sample database comprises:
Collecting, using the candidate frames in the current convolutional neural network model, a preset quantity of first-class samples, second-class samples, third-class samples and fourth-class samples from the sample database as the target sample data; wherein,
The first-class samples are samples in which the ratio of the facial image region to the total image region is less than a first ratio, the second-class samples are samples in which the ratio of the facial image region to the total image region is greater than a second ratio, the third-class samples are samples in which the ratio of the facial image region to the total image region is greater than or equal to the first ratio and less than or equal to the second ratio, and the fourth-class samples are samples containing a facial image and facial feature points.
10. The face detection method according to claim 2, characterized in that the method further comprises:
During the training of the current convolutional neural network model based on the target sample data, sorting the samples by loss value from large to small after each iteration, and using the first N samples as the input samples for the next iteration, until the loss value is less than a preset loss threshold.
11. A face detection device, characterized by comprising:
An acquisition module, for acquiring a target image;
A first processing module, for detecting the positions of faces and feature points in the target image using a multistage cascaded convolutional neural network model to obtain face information.
12. A user equipment, comprising a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor; characterized in that the processor implements the face detection method according to any one of claims 1-10 when executing the computer program.
13. A computer readable storage medium, storing a computer program thereon, characterized in that when the computer program is executed by a processor, the steps in the face detection method according to any one of claims 1-10 are implemented.
CN201810058182.6A 2018-01-22 2018-01-22 A kind of method for detecting human face, device and user equipment Pending CN110069959A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810058182.6A CN110069959A (en) 2018-01-22 2018-01-22 A kind of method for detecting human face, device and user equipment


Publications (1)

Publication Number Publication Date
CN110069959A true CN110069959A (en) 2019-07-30

Family

ID=67364523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810058182.6A Pending CN110069959A (en) 2018-01-22 2018-01-22 A kind of method for detecting human face, device and user equipment

Country Status (1)

Country Link
CN (1) CN110069959A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824049A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascaded neural network-based face key point detection method
CN105912990A (en) * 2016-04-05 2016-08-31 深圳先进技术研究院 Face detection method and face detection device
CN106991408A (en) * 2017-04-14 2017-07-28 电子科技大学 The generation method and method for detecting human face of a kind of candidate frame generation network
CN107220618A (en) * 2017-05-25 2017-09-29 中国科学院自动化研究所 Method for detecting human face and device, computer-readable recording medium, equipment


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852321A (en) * 2019-11-11 2020-02-28 北京百度网讯科技有限公司 Candidate frame filtering method and device and electronic equipment
CN110852321B (en) * 2019-11-11 2022-11-22 北京百度网讯科技有限公司 Candidate frame filtering method and device and electronic equipment
CN111160263A (en) * 2019-12-30 2020-05-15 中国电子科技集团公司信息科学研究院 Method and system for obtaining face recognition threshold
CN111160263B (en) * 2019-12-30 2023-09-05 中国电子科技集团公司信息科学研究院 Method and system for acquiring face recognition threshold
CN111862040A (en) * 2020-07-20 2020-10-30 中移(杭州)信息技术有限公司 Portrait picture quality evaluation method, device, equipment and storage medium
CN111862040B (en) * 2020-07-20 2023-10-31 中移(杭州)信息技术有限公司 Portrait picture quality evaluation method, device, equipment and storage medium
CN112651490A (en) * 2020-12-28 2021-04-13 深圳万兴软件有限公司 Training method and device for face key point detection model and readable storage medium
CN112651490B (en) * 2020-12-28 2024-01-05 深圳万兴软件有限公司 Training method and device for human face key point detection model and readable storage medium
CN113486807A (en) * 2021-07-08 2021-10-08 网易(杭州)网络有限公司 Face detection model training method, face detection model recognition device, face detection medium and face detection equipment
CN113486807B (en) * 2021-07-08 2024-02-27 网易(杭州)网络有限公司 Face detection model training method, face detection model recognition device, face detection model training medium and face detection model training equipment
CN114644276A (en) * 2022-04-11 2022-06-21 伊萨电梯有限公司 Intelligent elevator control method under mixed scene condition
CN114644276B (en) * 2022-04-11 2022-12-02 伊萨电梯有限公司 Intelligent elevator control method under mixed scene condition

Similar Documents

Publication Publication Date Title
CN110069959A (en) Face detection method, device and user equipment
CN108256544B (en) Picture classification method and device, robot
CN108764164B (en) Face detection method and system based on deformable convolution network
CN106874840B (en) Vehicle information recognition method and device
CN109583322B (en) Face recognition deep network training method and system
CN109977943A (en) Image recognition method, system and storage medium based on YOLO
CN111126472A (en) Improved target detection method based on SSD
KR20180004898A (en) Image processing technology and method based on deep learning
CN109583483A (en) Object detection method and system based on convolutional neural networks
CN107871100A (en) Face model training method and device, and face authentication method and device
CN111797983A (en) Neural network construction method and device
CN107871101A (en) Face detection method and device
CN104834933A (en) Method and device for detecting salient region of image
CN108846826A (en) Object detecting method, device, image processing equipment and storage medium
KR20040037180A (en) System and method of face recognition using portions of learned model
CN104537647A (en) Target detection method and device
KR20190123372A (en) Apparatus and method for robust face recognition via hierarchical collaborative representation
CN109472209A (en) Image recognition method, device and storage medium
CN109919252A (en) Method for generating a classifier using a small number of labeled images
CN110263731B (en) Single step human face detection system
CN108960404A (en) Image-based people counting method and equipment
CN111696136B (en) Target tracking method based on coding and decoding structure
CN110879982A (en) Crowd counting system and method
CN110222718A (en) Image processing method and device
CN104881682A (en) Image classification method based on locality preserving mapping and principal component analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190730