CN109389030A - Facial feature point detection method, apparatus, computer equipment and storage medium - Google Patents
Facial feature point detection method, apparatus, computer equipment and storage medium
- Publication number: CN109389030A
- Application number: CN201810963841.0A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention discloses a facial feature point detection method and apparatus, a computer device, and a storage medium. The method includes: dividing a sample data set into a training data set and a test data set according to a preset division ratio; training a face detection model comprising K parallel convolutional layers, a concatenation layer, and a global pooling layer on the training data set; testing the face detection model on the test data set, and computing from the test results the model's localization accuracy for facial feature points; if the localization accuracy is less than a preset accuracy threshold, re-dividing the sample data set and repeating training and testing until the localization accuracy is greater than or equal to the preset accuracy threshold; and inputting a face picture to be detected into the trained face detection model for computation to obtain the feature point prediction result for the face picture. The technical solution of the present invention effectively improves the face detection model's ability to localize facial feature points and its prediction accuracy.
Description
Technical field
The present invention relates to the computer field, and in particular to a facial feature point detection method and apparatus, a computer device, and a storage medium.
Background art
Face recognition is now widely used in practical applications, and identity verification via face recognition has become a common authentication method. In the face recognition process, the detection of facial feature points is the premise and foundation of face recognition and its related applications.
In existing designs of depth models for facial feature point detection, in order to suit practical application scenarios and keep execution time low, the depth model usually has to be designed as a small model. However, depth models built with such small-model design methods have poor predictive ability and low prediction accuracy, so the model cannot accurately localize the feature points of blurred faces, large-angle faces, faces with exaggerated expressions, and the like.
Summary of the invention
The embodiment of the present invention provides a kind of facial feature points detection method, apparatus, computer equipment and storage medium, with solution
The certainly current depth model problem lower to the predictablity rate of human face characteristic point.
A kind of facial feature points detection method, comprising:
Obtain sample data set, wherein the sample data set includes face samples pictures and each face sample
The human face characteristic point markup information of picture;
According to preset division proportion, the sample data set is divided into training dataset and test data set;
Initial Face detection model is trained using the training dataset, the Face datection mould trained
Type, wherein the Initial Face detection model is the convolutional Neural comprising K parallel-convolution layer, splicing layer and global pool layer
Network, each parallel-convolution layer have the visual perception range of different default scales, and K is the positive integer more than or equal to 3;
The Face datection model trained is tested using the test data set, and according to test result meter
The Face datection model trained is calculated to the locating accuracy of human face characteristic point;
If the locating accuracy is less than preset accuracy rate threshold value, the people that the sample data is concentrated again
Face samples pictures are divided, and obtain new training dataset and new test data set, and use the new training data
Collection is trained the Face datection model trained, to update the Face datection model trained, using described
New test data set tests the Face datection model trained, until the locating accuracy is greater than or waits
Until the preset accuracy rate threshold value;
If the locating accuracy is greater than or equal to the preset accuracy rate threshold value, locating accuracy is greater than or is waited
It is determined as trained Face datection model in the Face datection model trained of the preset accuracy rate threshold value;
Obtain face picture to be detected;
The face picture to be detected is inputted the trained Face datection model to calculate, obtains the people
The characteristic point prediction result of face picture, wherein the characteristic point prediction result includes attribute information and the position of target feature point
Information.
A facial feature point detection apparatus comprises:
a first obtaining module, configured to obtain a sample data set, wherein the sample data set includes face sample pictures and facial feature point annotation information for each face sample picture;
a sample division module, configured to divide the sample data set into a training data set and a test data set according to a preset division ratio;
a model training module, configured to train an initial face detection model on the training data set to obtain a trained face detection model, wherein the initial face detection model is a convolutional neural network comprising K parallel convolutional layers, a concatenation layer, and a global pooling layer, each parallel convolutional layer has a visual receptive field of a different preset scale, and K is a positive integer greater than or equal to 3;
a model testing module, configured to test the trained face detection model on the test data set, and to compute from the test results the trained model's localization accuracy for facial feature points;
a model optimization module, configured to, if the localization accuracy is less than a preset accuracy threshold, re-divide the face sample pictures in the sample data set to obtain a new training data set and a new test data set, train the trained face detection model on the new training data set to update it, and test the updated model on the new test data set, until the localization accuracy is greater than or equal to the preset accuracy threshold;
a training result module, configured to, if the localization accuracy is greater than or equal to the preset accuracy threshold, determine the trained face detection model whose localization accuracy is greater than or equal to the preset accuracy threshold as the final trained face detection model;
a second obtaining module, configured to obtain a face picture to be detected;
a model prediction module, configured to input the face picture to be detected into the final trained face detection model for computation and obtain the feature point prediction result for the face picture, wherein the feature point prediction result includes attribute information and position information of target feature points.
A computer device comprises a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the steps of the above facial feature point detection method when executing the computer program.
A computer-readable storage medium stores a computer program, wherein the computer program implements the steps of the above facial feature point detection method when executed by a processor.
In the above facial feature point detection method and apparatus, computer device, and storage medium: on the one hand, a convolutional neural network comprising multiple parallel convolutional layers, a concatenation layer, and a global pooling layer is constructed as the face detection model, where the parallel convolutional layers have visual receptive fields of different preset scales. By performing parallel convolution with receptive fields of different scales in each parallel convolutional layer, and stitching the computation results of the parallel convolutional layers together through the concatenation layer, the face detection model can capture detail features at different scales simultaneously, improving its expressive power; moreover, the pooling computation of the global pooling layer makes the model's output invariant to position while avoiding overfitting. With this convolutional neural network structure, the face detection model's ability to localize facial feature points is improved; in particular, the feature points of blurred faces, large-angle faces, faces with exaggerated expressions, and the like can be localized accurately, thereby effectively improving the model's prediction accuracy. On the other hand, a sample data set composed of face sample pictures with accurate facial feature point annotation information is obtained and divided into a training data set and a test data set according to a preset ratio; the face detection model is trained on the training data set, tested on the test data set, and its localization accuracy is computed from the test results. The predictive ability of the trained model is judged from the localization accuracy, and training is continually refined by adjusting the training and test data sets until a satisfactory localization accuracy is reached, realizing training tuning of the face detection model and further enhancing its predictive ability.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative labor.
Fig. 1 is a schematic diagram of an application environment of the facial feature point detection method in an embodiment of the present invention;
Fig. 2 is a flowchart of the facial feature point detection method in an embodiment of the present invention;
Fig. 3 is a schematic network structure diagram of a face detection model with three parallel convolutional layers in the facial feature point detection method in an embodiment of the present invention;
Fig. 4 is a flowchart of step S8 of the facial feature point detection method in an embodiment of the present invention;
Fig. 5 is a flowchart, within step S4 of the facial feature point detection method in an embodiment of the present invention, of computing the face detection model's localization accuracy for facial feature points from the test results;
Fig. 6 is a flowchart of step S1 of the facial feature point detection method in an embodiment of the present invention;
Fig. 7 is a flowchart of step S14 of the facial feature point detection method in an embodiment of the present invention;
Fig. 8 is a schematic diagram of the facial feature point detection apparatus in an embodiment of the present invention;
Fig. 9 is a schematic diagram of the computer device in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The facial feature point detection method provided by the present application can be applied in the application environment shown in Fig. 1. The environment includes a server side and a client connected through a network, which may be a wired or wireless network. The client specifically includes, but is not limited to, various personal computers, laptops, smartphones, tablet computers, and portable wearable devices; the server side can be implemented as an independent server or as a server cluster composed of multiple servers. The client sends the collected sample data set and the face picture to be detected to the server side; the server side performs model training on the received sample data set and uses the trained face detection model to detect the feature points of the face picture to be detected.
In one embodiment, as shown in Fig. 2, a facial feature point detection method is provided. Taking its application to the server side in Fig. 1 as an example, the details are as follows:
S1: Obtain a sample data set, wherein the sample data set includes face sample pictures and facial feature point annotation information for each face sample picture.
Specifically, the sample data set may be collected in advance and stored in a sample database; it contains a number of face sample pictures and the facial feature point annotation information for each of them.
It should be understood that each face sample picture and its facial feature point annotation information are stored in association in the sample data set.
The facial feature point annotation information may include the attribute information and position information of a facial feature point. The attribute information is the facial part to which the feature point belongs, and the position information is the pixel coordinate of the feature point in the face sample picture.
For example, a specific facial feature point annotation is "eyes, (200, 150)", where "eyes" is the facial part to which the feature point belongs, i.e. the attribute information, and "(200, 150)" is the pixel coordinate of the feature point in the face sample picture, i.e. the position information.
S2: Divide the sample data set into a training data set and a test data set according to a preset division ratio.
Specifically, the face sample pictures in the sample data set obtained in step S1 are randomly divided according to the preset division ratio to obtain a training data set and a test data set.
For example, with a preset division ratio of 3:2, and assuming the sample data set contains 1,000,000 face sample pictures, 600,000 face sample pictures are randomly selected from the sample data set as the training data set, and the remaining 400,000 face sample pictures serve as the test data set.
It should be noted that the preset division ratio can be set according to the needs of the practical application and is not limited here.
S3: Train the initial face detection model on the training data set to obtain a trained face detection model, wherein the initial face detection model is a convolutional neural network comprising K parallel convolutional layers, a concatenation layer, and a global pooling layer, each parallel convolutional layer has a visual receptive field of a different preset scale, and K is a positive integer greater than or equal to 3.
In this embodiment, the initial face detection model, the trained face detection model, and the final trained face detection model mentioned below all refer to a face detection model with a stacked convolutional neural network structure. The convolutional neural network of this model comprises K parallel convolutional layers, a concatenation layer, and a global pooling layer, and each parallel convolutional layer is configured with a convolution kernel whose visual receptive field has a different preset scale. The K parallel convolutional layers are arranged in a preset order: the output data of each parallel convolutional layer serves as the input data of the next parallel convolutional layer, and the output data of every parallel convolutional layer also serves as input data of the concatenation layer; the output data of the concatenation layer serves as the input data of the global pooling layer, and the output data of the global pooling layer is the output result of the face detection model, which includes the attribute information and position information of the facial feature points in the face sample picture predicted by the model.
As shown in Fig. 3, which is a schematic network structure diagram of a face detection model with three parallel convolutional layers, the three parallel convolutional layers are convolutional layer A, convolutional layer B, and convolutional layer C, and the preset receptive-field scales of the layers correspond to 3 × 3, 5 × 5, and 7 × 7 convolution kernels respectively, the unit of the kernel size being pixels.
By performing parallel convolution with receptive fields of different scales in each parallel convolutional layer, and stitching the computation results of the parallel convolutional layers together through the concatenation layer, the face detection model can capture detail features at different scales simultaneously, improving its expressive power; moreover, the pooling computation of the global pooling layer makes the model's output invariant to position while avoiding overfitting. This stacked convolutional neural network structure improves the face detection model's ability to localize facial feature points; in particular, the feature points of blurred faces, large-angle faces, faces with exaggerated expressions, and the like can be localized accurately, thereby effectively improving the model's prediction accuracy.
Specifically, when training the initial face detection model on the training data set, the face sample pictures in the training data set are input into the initial face detection model and computed layer by layer according to its stacked convolutional neural network structure. The output of the initial face detection model serves as the test result, and comparative learning is performed between the test result and the facial feature point annotation information of the face sample pictures; the parameters of every network layer in the stacked convolutional neural network structure are adjusted according to the result of this comparison. Through repeated training and parameter adjustment, the trained face detection model is obtained.
Further, before each parallel convolutional layer performs its convolution computation, the input data of that layer may be standardized. The standardization may specifically include global normalization (Batch Normalization, BN) and one-sided suppression. Global normalization prevents gradients from vanishing or exploding and speeds up training. One-sided suppression uses the rectified linear unit (ReLU) as the activation function to suppress the negative side of the globally normalized output, so that the sparsified face detection model can more accurately mine facial feature points and fit the training data. Meanwhile, performing the convolution computation on standardized input data effectively reduces the amount of computation and improves computational efficiency.
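The standardization described above can be sketched as follows; a minimal numpy illustration of BN-style global normalization (without the learned scale and shift parameters of full BN) followed by ReLU:

```python
import numpy as np

def global_normalization(x, eps=1e-5):
    """Normalize to zero mean and unit variance over the batch axis
    (a simplified BN, omitting learned scale/shift)."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def one_sided_suppression(x):
    """ReLU activation: negative responses are zeroed, sparsifying the output."""
    return np.maximum(x, 0.0)

batch = np.array([[1.0, -2.0], [3.0, 4.0], [-1.0, 2.0]])
standardized = one_sided_suppression(global_normalization(batch))
```

Feeding `standardized` rather than `batch` into a convolutional layer is the order of operations the paragraph above describes.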
S4: Test the trained face detection model on the test data set, and compute from the test results the trained model's localization accuracy for facial feature points.
Specifically, the face sample pictures in the test data set are input into the trained face detection model obtained in step S3 for testing, and the test result output by the model is obtained; the test result includes the predicted position information of each facial feature point in a face sample picture.
For each face sample picture, the test result is compared against the actual position information of each facial feature point in the picture's facial feature point annotation information, so as to judge whether the test result is accurate and obtain a judgment result; the trained model's localization accuracy for facial feature points is then computed from the judgment results of all face sample pictures in the test data set.
In one embodiment, the judgment result may take two values, correct and wrong: if the test result of a face sample picture is consistent with its facial feature point annotation information, the judgment result is correct; otherwise it is wrong. The number of face sample pictures in the test data set whose judgment result is correct is counted, and the ratio of that number to the total number of face sample pictures in the test data set is taken as the localization accuracy.
Further, the localization accuracy may also be obtained by computing the normalized mean error (NME) of the test data set.
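Both evaluation schemes can be sketched as follows. This is a minimal numpy illustration; the NME normalizing distance (commonly the interocular distance) is an assumption, since the patent does not fix it:

```python
import numpy as np

def localization_accuracy(judgments):
    """Ratio of correctly judged test pictures to all test pictures."""
    return sum(judgments) / len(judgments)

def nme(pred, truth, norm_dist):
    """Normalized mean error: mean Euclidean point error divided by a
    normalizing distance (interocular distance is a common choice)."""
    errors = np.linalg.norm(pred - truth, axis=-1)  # per-point L2 error
    return errors.mean() / norm_dist

judgments = [True, True, False, True]            # per-picture correct/wrong
acc = localization_accuracy(judgments)           # 0.75

pred = np.array([[100.0, 50.0], [140.0, 50.0]])  # predicted eye corners
truth = np.array([[102.0, 50.0], [140.0, 48.0]])
score = nme(pred, truth, norm_dist=40.0)         # lower is better
```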
S5: If the localization accuracy is less than the preset accuracy threshold, re-divide the face sample pictures in the sample data set to obtain a new training data set and a new test data set, train the trained face detection model on the new training data set to update it, and test the updated model on the new test data set, until the localization accuracy is greater than or equal to the preset accuracy threshold.
Specifically, the localization accuracy obtained in step S4 is compared with the preset accuracy threshold. If the localization accuracy is less than the accuracy threshold, it is confirmed that training of the face detection model is not complete, and network parameter tuning of the trained model needs to continue.
The face sample pictures in the sample data set are randomly re-selected according to the preset division ratio to obtain a new training data set and a new test data set. The trained face detection model is then trained on the new training data set, using the same training process as step S3, so as to update it; after training is complete, the updated model is tested on the new test data set using the same testing process as step S4, and the localization accuracy is computed from the test results.
If the localization accuracy is still less than the accuracy threshold, this step continues to be repeated, training and testing again, until the localization accuracy is greater than or equal to the accuracy threshold, at which point training and testing end.
S6: If the localization accuracy is greater than or equal to the preset accuracy threshold, determine the trained face detection model whose localization accuracy is greater than or equal to the preset accuracy threshold as the final trained face detection model.
Specifically, if the localization accuracy obtained in step S4, or the localization accuracy obtained after the repeated training and testing of step S5, is greater than or equal to the preset accuracy threshold, then the trained face detection model whose localization accuracy meets the threshold is the final trained face detection model, which can be used to detect facial feature points.
S7: Obtain the face picture to be detected.
Specifically, the face picture to be detected may be a face picture input through the client by a user whose identity is to be verified; the server side obtains this face picture from the client.
S8: Input the face picture to be detected into the trained face detection model for computation, and obtain the feature point prediction result for the face picture, wherein the feature point prediction result includes the attribute information and position information of target feature points.
Specifically, the face picture obtained in step S7 is input into the trained face detection model obtained in step S6 and computed according to the model's stacked convolutional neural network structure. The output of the trained face detection model, which includes the attribute information and position information of the target feature points identified in the face picture to be detected, is the feature point prediction result of the face picture.
In this embodiment, on the one hand, a convolutional neural network comprising multiple parallel convolutional layers, a concatenation layer, and a global pooling layer is constructed as the face detection model, where the parallel convolutional layers have visual receptive fields of different preset scales. By performing parallel convolution with receptive fields of different scales in each parallel convolutional layer, and stitching the computation results of the parallel convolutional layers together through the concatenation layer, the face detection model can capture detail features at different scales simultaneously, improving its expressive power; moreover, the pooling computation of the global pooling layer makes the model's output invariant to position while avoiding overfitting. With this convolutional neural network structure, the model's ability to localize facial feature points is improved; in particular, the feature points of blurred faces, large-angle faces, faces with exaggerated expressions, and the like can be localized accurately, thereby effectively improving the model's prediction accuracy. On the other hand, a sample data set composed of face sample pictures with accurate facial feature point annotation information is obtained and divided into a training data set and a test data set according to a preset ratio; the face detection model is trained on the training data set, the trained model is tested on the test data set, and its localization accuracy is computed from the test results. The predictive ability of the trained model is judged from the localization accuracy, and training is continually refined by adjusting the training and test data sets until a satisfactory localization accuracy is reached, realizing training tuning of the face detection model and further enhancing its predictive ability.
In one embodiment, as shown in Fig. 4, K equals 3 and the K parallel convolutional layers comprise a first convolutional layer, a second convolutional layer, and a third convolutional layer. In step S8, inputting the face picture to be detected into the trained face detection model for computation and obtaining the feature point prediction result for the face picture specifically comprises the following steps:
S81: Standardize the face picture to be detected to obtain first face data.
Standardization includes global normalization and one-sided suppression. Global normalization is BN processing, which effectively prevents gradients from vanishing or exploding; one-sided suppression uses ReLU as the activation function to suppress the negative side of the globally normalized output image, avoiding overfitting.
Specifically, after global normalization and one-sided suppression are applied to the face picture to be detected, the first face data is obtained.
S82: the first human face data is inputted into the first convolutional layer and carries out convolutional calculation, obtains the first convolution results.
Specifically, the first human face data step S81 obtained inputs the first convolutional layer and carries out convolutional calculation, the convolution meter
It calculates and convolution transform is carried out to the image array of the first human face data, which is extracted by the convolution kernel of the first convolutional layer
Feature exports characteristic pattern (Feature Map), i.e. the first convolution results.
S83: being standardized the first convolution results, obtains the second human face data.
Specifically, the first convolution results step S82 obtained continue standardization, obtain the second face number
According to.
Wherein, the course of standardization process of the first convolution results can be used at global normalization identical with step S81
Reason and unilateral inhibition treatment process, details are not described herein again.
S84: Input the second face data into the second convolutional layer for convolution calculation to obtain a second convolution result.
Specifically, the second face data obtained in step S83 is input into the second convolutional layer for convolution calculation. The convolution calculation performs a convolution transform on the image matrix of the second face data: the convolution kernel of the second convolutional layer extracts features from the image matrix and outputs the second convolution result.
S85: Standardize the second convolution result to obtain third face data.
Specifically, the second convolution result obtained in step S84 is further standardized to obtain the third face data.
The standardization of the second convolution result may use the same global normalization processing and unilateral suppression processing as in step S81, and the details are not repeated here.
S86: Input the third face data into the third convolutional layer for convolution calculation to obtain a third convolution result.
Specifically, the third face data obtained in step S85 is input into the third convolutional layer for convolution calculation. The convolution calculation performs a convolution transform on the image matrix of the third face data: the convolution kernel of the third convolutional layer extracts features from the image matrix and outputs the third convolution result.
It should be noted that the kernel size of the first convolutional layer, the kernel size of the second convolutional layer and the kernel size of the third convolutional layer can be configured in advance according to the needs of the practical application; they may be identical to or different from one another, and no restriction is imposed here.
S87: Input the first convolution result, the second convolution result and the third convolution result into the splicing layer for splicing calculation to obtain a convolution output result.
Specifically, the first convolution result obtained in step S82, the second convolution result obtained in step S84 and the third convolution result obtained in step S86 are simultaneously input into the splicing (concatenation) layer for splicing calculation to obtain the convolution output result.
S88: Input the convolution output result into the global pooling layer for pooling calculation to obtain the feature point prediction result of the face picture to be detected.
Specifically, the convolution output result obtained in step S87 is input into the global pooling layer for pooling calculation to obtain the feature point prediction result of the face picture to be detected.
Since the convolution output result contains a large number of feature parameters, among which there may also be redundant features that have no practical significance or are repeated, the pooling calculation of the global pooling layer screens out such redundant features, reduces unnecessary parameters and avoids over-fitting.
Further, the pooling calculation may use the max pooling (Max Pooling) method or the average pooling (Mean Pooling) method. The max pooling method takes the maximum value of a feature map region as the value of that region after pooling; the average pooling method takes the average value of a feature map region as the pooling result of that region.
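The two pooling methods can be illustrated as follows; this is a minimal sketch over plain Python lists, whereas a real global pooling layer operates per channel on 2-D feature maps:

```python
def max_pool(region):
    """Max pooling: the maximum value of the region represents the pooled region."""
    return max(region)

def mean_pool(region):
    """Average pooling: the mean of the region represents the pooled region."""
    return sum(region) / len(region)

def global_pool(feature_maps, pool):
    """Global pooling: each whole feature map is reduced to a single value."""
    return [pool(fm) for fm in feature_maps]
```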
In this embodiment, when the face detection model includes three parallel convolutional layers, the face picture to be detected is standardized to obtain the first face data; the first face data is input into the first convolutional layer for convolution calculation to obtain the first convolution result; the first convolution result is then further standardized to obtain the second face data, which is input into the second convolutional layer for convolution calculation to obtain the second convolution result; the second convolution result is further standardized to obtain the third face data, which is input into the third convolutional layer for convolution calculation to obtain the third convolution result. The outputs of the three parallel convolutional layers are then input into the splicing layer for splicing calculation to obtain the convolution output result, and finally the convolution output result is input into the global pooling layer for pooling calculation to obtain the feature point prediction result of the face picture to be detected. Through the calculation of this convolutional neural network structure, the facial feature points of the face picture to be detected can be accurately located; in particular, the feature points of blurred faces, wide-angle faces, faces with exaggerated expressions and the like can be accurately located, thereby effectively improving the prediction accuracy of the face detection model.
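The S81-S88 data flow can be sketched end to end as follows. This is a toy 1-D analogue under strong simplifying assumptions: the kernels k1, k2 and k3 are hypothetical, the signals are flat 1-D lists rather than multi-channel 2-D feature maps, and the regression head that would map the pooled features to feature point coordinates is reduced to a single averaged value:

```python
import math

def standardize(x, eps=1e-5):
    # Global normalization (BN-style) followed by ReLU unilateral suppression
    m = sum(x) / len(x)
    v = sum((a - m) ** 2 for a in x) / len(x)
    return [max(0.0, (a - m) / math.sqrt(v + eps)) for a in x]

def conv1d(x, kernel):
    # Valid convolution of a 1-D signal with a small kernel
    n = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(n)) for i in range(len(x) - n + 1)]

def predict(pixels, k1, k2, k3):
    c1 = conv1d(standardize(pixels), k1)   # S81-S82
    c2 = conv1d(standardize(c1), k2)       # S83-S84
    c3 = conv1d(standardize(c2), k3)       # S85-S86
    spliced = c1 + c2 + c3                 # S87: splicing layer (concatenation)
    return sum(spliced) / len(spliced)     # S88: global average pooling
```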
In one embodiment, as shown in Figure 5, in step S4, calculating the positioning accuracy of the trained face detection model for facial feature points according to the test result specifically comprises the following steps:
S41: According to the test result, calculate the normalized mean error of each test sample in the test data set corresponding to the test result.
Specifically, the test result includes the predicted position information of each facial feature point in each test sample of the test data set corresponding to the test result. The normalized mean error (NME) of each test sample is calculated according to the following formula:

P = (1/N) · Σ_{k=1}^{N} |x_k − y_k| / d

where P is the normalized mean error of the test sample, N is the actual number of facial feature points of the test sample, x_k is the actual position information of the k-th facial feature point of the test sample, y_k is the predicted position information of the k-th facial feature point in the test result of the test sample, |x_k − y_k| is the distance between the actual position and the predicted position of the k-th facial feature point, and d is the face image size of the test sample. The actual position information and the predicted position information may specifically be coordinate information, and the face image size may specifically be the pixel area of the face picture.
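Assuming the position information is 2-D pixel coordinates and |x_k − y_k| is the Euclidean distance, the normalized mean error of one test sample can be computed as:

```python
import math

def normalized_mean_error(actual, predicted, d):
    """NME of one test sample: P = (1/N) * sum_k |x_k - y_k| / d.

    actual and predicted are equal-length lists of (x, y) landmark coordinates;
    d is the face image size used for normalization.
    """
    n = len(actual)
    total = sum(math.dist(a, p) for a, p in zip(actual, predicted))
    return total / (n * d)
```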
S42: Divide the preset error threshold evenly according to a preset interval value to obtain P sub-thresholds, where P is a positive integer.
Specifically, the values from 0 to the preset error threshold are divided evenly according to the preset interval value to obtain the P sub-thresholds.
It should be noted that the preset error threshold and the preset interval value can be set according to the needs of the practical application, and no restriction is imposed here.
For example, if the preset error threshold is 0.07 and the preset interval value is 0.001, the values between 0 and 0.07 are divided evenly at intervals of 0.001 to obtain 70 sub-thresholds.
It should also be noted that there is no necessary execution order between step S41 and step S42; they may also be executed in parallel, and no restriction is imposed here.
S43: Count the number of test samples whose normalized mean error is less than each sub-threshold, and calculate the percentage of that number in the total number of test samples in the test data set corresponding to the test result, obtaining P percentages.
Specifically, for each test sample, the normalized mean error obtained in step S41 is compared with each sub-threshold, and according to the comparison results, the number of test samples whose normalized mean error is less than each sub-threshold is counted. The quotient between each such number and the total number of test samples in the test data set corresponding to the test result is then calculated, obtaining P quotients, i.e., the P percentages.
For example, if the preset error threshold is 0.2 and the preset interval value is 0.05, then P is 4 and the 4 sub-thresholds are 0.05, 0.1, 0.15 and 0.2. Suppose the test data set corresponding to the test result contains 10 test samples in total, and the normalized mean errors of the test samples are 0.003, 0.075, 0.06, 0.07, 0.23, 0.18, 0.11, 0.04, 0.09 and 0.215. Then the statistics are as follows:
The normalized mean errors less than 0.05 are 0.003 and 0.04, i.e., the number of test samples whose normalized mean error is less than 0.05 is 2;
The normalized mean errors less than 0.1 are 0.003, 0.075, 0.04, 0.06, 0.07 and 0.09, i.e., the number of test samples whose normalized mean error is less than 0.1 is 6;
The normalized mean errors less than 0.15 are 0.003, 0.075, 0.04, 0.06, 0.07, 0.09 and 0.11, i.e., the number of test samples whose normalized mean error is less than 0.15 is 7;
The normalized mean errors less than 0.2 are 0.003, 0.075, 0.04, 0.06, 0.07, 0.09, 0.11 and 0.18, i.e., the number of test samples whose normalized mean error is less than 0.2 is 8;
The 4 percentages obtained by the calculation of this step are therefore: 2/10 = 20%, 6/10 = 60%, 7/10 = 70% and 8/10 = 80%.
S44: Calculate the average value of the P percentages, and take the average value as the positioning accuracy.
Specifically, the arithmetic mean of the P percentages obtained in step S43 is calculated, and this arithmetic mean is the positioning accuracy.
Continuing with the example of step S43, the average value of the 4 percentages is (20% + 60% + 70% + 80%)/4 = 57.5%.
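Steps S42 to S44 can be sketched together as follows, reproducing the worked example above; sub-thresholds are applied with a strict less-than comparison, as in the example:

```python
def positioning_accuracy(errors, threshold, interval):
    """Steps S42-S44: split [0, threshold] into sub-thresholds, compute the
    fraction of samples below each sub-threshold, and average the fractions."""
    p = round(threshold / interval)
    subs = [interval * (i + 1) for i in range(p)]
    pcts = [sum(e < t for e in errors) / len(errors) for t in subs]
    return pcts, sum(pcts) / len(pcts)
```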
In this embodiment, the normalized mean error of each test sample is calculated, the preset error threshold is divided evenly according to the preset interval value, the number of test samples whose normalized mean error is less than each sub-threshold is counted, and the percentage of that number in the total number of test samples in the test data set corresponding to the test result is calculated, obtaining P percentages; the arithmetic mean of the P percentages is then taken as the positioning accuracy. The positioning accuracy obtained by the calculation method of this embodiment can objectively and accurately reflect how accurately the trained face detection model predicts the feature points, thereby providing an accurate basis of judgment for further optimization of the model training parameters.
In one embodiment, as shown in Figure 6, in step S1, obtaining the sample data set specifically comprises the following steps:
S11: Obtain video data and pictures.
Specifically, the video data is obtained from a preset video source channel, where the video source channel may be video data recorded by a monitoring device, video data saved in a server database, video data collected in a video application, etc. The pictures are obtained from a preset picture source channel, where the picture source channel may be pictures published on the internet, pictures pre-stored in a server database, etc.
It should be understood that multiple pieces of video data and multiple pictures are obtained.
S12: Extract target video frame images from the video data according to a preset frame extraction frequency and a preset maximum frame number.
Specifically, each piece of video data obtained in step S11 is processed: according to the preset frame extraction frequency and the preset maximum frame number, frame images are extracted starting from a preset position of the video data to obtain the target video frame images. The preset position may be the first frame of the video data or another position, and no restriction is imposed here.
It should be noted that the preset frame extraction frequency can usually be set to randomly extract 1 frame from every 2 consecutive frames, and the preset maximum frame number is usually an empirical value whose range may be between 1700 and 1800, but they are not limited to these values; the preset frame extraction frequency and the preset maximum frame number can be configured according to the needs of the practical application, and no restriction is imposed here.
For example, suppose the preset frame extraction frequency is to randomly extract 1 frame from every 5 consecutive frames and the preset maximum frame number is 1800. If the total number of frames of the video data is 2500 and extraction starts from the first frame of the video data, then the number of target video frame images is 500.
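The frame extraction of step S12 can be sketched as follows, assuming the frequency "randomly extract 1 frame from every N consecutive frames" and extraction starting from the first frame; the function and parameter names are illustrative, not part of the embodiment:

```python
import random

def extract_frames(total_frames, group_size, max_frames, start=0):
    """Randomly pick 1 frame index from every `group_size` consecutive frames,
    stopping once `max_frames` indices have been extracted."""
    picked = []
    for lo in range(start, total_frames, group_size):
        hi = min(lo + group_size, total_frames)
        picked.append(random.randrange(lo, hi))
        if len(picked) >= max_frames:
            break
    return picked
```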
S13: Perform facial feature point annotation on the target video frame images and the pictures respectively, obtaining the facial feature point annotation information of the target video frame images and the facial feature point annotation information of the pictures.
Specifically, facial feature point annotation is performed on each target video frame image obtained in step S12 to obtain the facial feature point annotation information of each target video frame image; at the same time, facial feature point annotation is performed on each picture obtained in step S11 to obtain the facial feature point annotation information of each picture. The facial feature point annotation information includes the attribute information and the position information of each facial feature point: the attribute information is specifically the facial part to which the facial feature point belongs, and the position information is specifically the pixel coordinates of the facial feature point in the face sample picture. Further, the facial feature point annotation of the target video frame images and the pictures is realized by combining a preset facial feature point annotation tool with manual correction, as detailed below:
(1) The target video frame images and the pictures are respectively input into the preset facial feature point annotation tool, and the tool performs facial feature point annotation on the faces in the target video frame images and the pictures respectively, obtaining a first annotation result.
The preset facial feature point annotation tool may specifically be an existing neural network tool capable of facial feature point annotation, and the facial feature points include facial features such as ears, eyebrows, eyes, nose, lips and face shape.
Since the annotation accuracy of existing neural network tools capable of facial feature point annotation is relatively low, further manual correction is needed.
(2) The first annotation result is sent to a target user for confirmation and adjustment, the correction information returned by the target user is received, and the erroneously annotated information in the first annotation result is updated according to the correction information, obtaining accurate facial feature point annotation information.
S14: Process the pictures according to a preset processing method, obtaining new pictures and the facial feature point annotation information of the new pictures.
Specifically, the preset processing method includes but is not limited to horizontal flipping, random clockwise rotation, random counterclockwise rotation, translation, scaling, brightness increase/decrease, etc. After the pictures obtained in step S11 are processed according to the preset processing method, the new pictures are obtained, and the position information in the facial feature point annotation information of the pictures is synchronously updated according to the processing method, obtaining the facial feature point annotation information of the new pictures.
It should be noted that by processing the pictures according to the preset processing method to obtain new pictures and their corresponding facial feature point annotation information, the sample data set can be enriched rapidly without repeating the annotation process of step S13. This provides rich and varied face sample pictures for the training and testing of the face detection model and ensures the diversity and balance of the samples, so as to better support the training and testing of the face detection model.
S15: Take the target video frame images, the pictures and the new pictures as the face sample pictures.
Specifically, the target video frame images obtained in step S12, the pictures obtained in step S11 and the new pictures obtained in step S14 are all used as the face sample pictures of the sample data set, and the facial feature point annotation information of the target video frame images, the pictures and the new pictures is the facial feature point annotation information of the face sample pictures.
In this embodiment, on the one hand, video frames are extracted from the video data and facial feature point annotation is performed on the obtained target video frame images. Since the change of face pose between consecutive frames of video data is small, low-cost and accurate annotation can be achieved when the preset facial feature point annotation tool is combined with manual correction to annotate the target video frame images, yielding a large amount of accurate sample data. At the same time, when extracting the target video frame images, setting the frame extraction frequency avoids the insufficient data diversity that would result from the small pose and expression changes of faces in consecutive frames, and setting the maximum frame number prevents long videos from dominating the data set and causing over-fitting of the face detection model. On the other hand, by processing the pictures, the picture data is augmented to the same order of magnitude as the video data. This embodiment thus reduces the annotation cost of the face sample pictures while obtaining a sample data set containing rich face sample pictures, which can effectively support the training and testing of the face detection model and thereby improve the training accuracy and predictive ability of the face detection model.
In one embodiment, as shown in Figure 7, in step S14, processing the pictures according to the preset processing method to obtain the new pictures and the facial feature point annotation information of the new pictures specifically comprises the following steps:
S141: Perform horizontal flipping processing on the pictures, obtaining first pictures and the facial feature point annotation information of the first pictures.
Specifically, horizontal flipping processing is performed on the pictures, and the position information of each facial feature point in the facial feature point annotation information of the pictures is synchronously adjusted according to the correspondence of the horizontal flip, obtaining the first pictures and the facial feature point annotation information of the first pictures.
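The synchronous adjustment of the position information under a horizontal flip can be sketched as follows, assuming pixel coordinates with the origin at the top-left corner; in a real pipeline the attribute information of left/right facial parts would also need to be swapped, which is omitted here:

```python
def flip_landmarks(landmarks, width):
    """Mirror landmark coordinates for a horizontally flipped picture of the
    given pixel width: x -> width - 1 - x, y unchanged."""
    return [(width - 1 - x, y) for x, y in landmarks]
```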
It is understood that the number of first pictures is the same as the number of pictures. Taking the sum of the number of pictures and the number of first pictures as a first quantity, the first quantity is 2 times the number of pictures.
S142: Perform rotation processing on the pictures and the first pictures respectively according to a preset rotation mode, obtaining second pictures and the facial feature point annotation information of the second pictures.
Specifically, the pictures and the first pictures obtained in step S141 are rotated respectively according to the preset rotation mode to obtain the second pictures, and the position information of each facial feature point in the facial feature point annotation information of the pictures and the first pictures is synchronously adjusted according to the correspondence of the preset rotation mode, obtaining the facial feature point annotation information of the second pictures.
It should be noted that the preset rotation mode may specifically be random clockwise rotation, random counterclockwise rotation, etc., but is not limited to these; it can be configured according to the needs of the practical application, and no restriction is imposed here.
It is understood that if the preset rotation mode comprises both random clockwise rotation and random counterclockwise rotation, the number of second pictures obtained is 4 times the number of pictures. Taking the sum of the number of second pictures and the first quantity as a second quantity, the second quantity is 6 times the number of pictures.
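The synchronous adjustment of the position information under a rotation can be sketched as follows, assuming rotation about a given centre point (cx, cy) by an angle in degrees; this is an illustrative helper, not a method named in the embodiment:

```python
import math

def rotate_landmarks(landmarks, angle_deg, cx, cy):
    """Rotate landmark coordinates by angle_deg around the centre (cx, cy),
    matching a rotation applied to the picture itself."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(cx + (x - cx) * cos_a - (y - cy) * sin_a,
             cy + (x - cx) * sin_a + (y - cy) * cos_a) for x, y in landmarks]
```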
S143: According to a preset offset and a preset scaling ratio, successively perform translation processing and scaling processing on the face rectangle frames in the pictures, the first pictures and the second pictures, obtaining third pictures and the facial feature point annotation information of the third pictures.
Specifically, translation processing is performed on the face rectangle frames in the pictures, the first pictures and the second pictures according to the preset offset, and then scaling processing is performed on the face rectangle frames in the translated pictures, first pictures and second pictures according to the preset scaling ratio, obtaining the third pictures. At the same time, the position information of each facial feature point in the facial feature point annotation information is synchronously adjusted according to the correspondence of the preset offset and the preset scaling ratio, obtaining the facial feature point annotation information of the third pictures.
The preset offset and the preset scaling ratio may be random values within a preset range.
It is understood that the number of third pictures is 2 × 3 × 2 = 12 times the number of pictures.
S144: According to a preset extraction ratio, randomly select target pictures from the pictures, the first pictures, the second pictures and the third pictures, and perform random brightness change processing on the target pictures, obtaining fourth pictures and the facial feature point annotation information of the fourth pictures.
Specifically, target pictures are randomly selected according to the preset extraction ratio from the pictures, the first pictures obtained in step S141, the second pictures obtained in step S142 and the third pictures obtained in step S143. Random brightness change processing is performed on the selected target pictures to obtain the fourth pictures, and the facial feature point annotation information of the target pictures is the facial feature point annotation information of the fourth pictures.
The random brightness change processing includes performing brightness increase or brightness decrease processing on randomly selected pixels; the increase amplitude and decrease amplitude may be generated randomly or determined by a preset amplitude threshold. The preset extraction ratio can usually be set to 30%, but is not limited to this and can be configured according to the needs of the practical application.
It is understood that when the preset extraction ratio is 30%, the quantity of pictures obtained is 12 × 1.3 = 15.6 times the number of pictures.
S145: Take the first pictures, the second pictures, the third pictures and the fourth pictures as the new pictures.
Specifically, the first pictures obtained in step S141, the second pictures obtained in step S142, the third pictures obtained in step S143 and the fourth pictures obtained in step S144 are all used as the new pictures, and the facial feature point annotation information of the first, second, third and fourth pictures is the facial feature point annotation information of the new pictures.
For example, suppose the number of pictures obtained is 3300; then the number of new pictures obtained after the augmentation of this embodiment is about 50,000, which effectively expands the sample data set.
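The multipliers stated in this embodiment can be checked with a few lines of bookkeeping; this is illustrative only, since the exact totals depend on how the extraction ratio is applied:

```python
def augmentation_multipliers():
    """Per-step multipliers stated in this embodiment: flipping doubles the
    set, the two random rotations bring it to 6x, translation + scaling to
    12x, and the 30% brightness-change extraction to 12 * 1.3 = 15.6x."""
    flip = 2
    rotate = flip * 3
    translate_scale = rotate * 2
    total = translate_scale * 1.3
    return {"flip": flip, "rotate": rotate,
            "translate_scale": translate_scale, "total": total}
```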
In this embodiment, a series of horizontal flipping, rotation, translation, scaling and random brightness change processing is performed on the pictures, so that the number of new pictures obtained increases multiplicatively. Without increasing the annotation cost of the facial feature point annotation information, the sample data set is expanded rapidly, the acquisition efficiency of the sample data set is improved, and a sample data set containing rich face sample pictures is obtained, which can effectively support the training and testing of the face detection model and thereby improve the training accuracy and predictive ability of the face detection model.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In one embodiment, a facial feature point detection apparatus is provided, and the facial feature point detection apparatus corresponds to the facial feature point detection method in the above embodiments. As shown in Figure 8, the facial feature point detection apparatus includes a first obtaining module 81, a sample division module 82, a model training module 83, a model testing module 84, a model optimization module 85, a training result module 86, a second obtaining module 87 and a model prediction module 88. Each functional module is described in detail as follows:
a first obtaining module 81 for obtaining a sample data set, where the sample data set includes face sample pictures and the facial feature point annotation information of each face sample picture;
a sample division module 82 for dividing the sample data set into a training data set and a test data set according to a preset division ratio;
a model training module 83 for training an initial face detection model using the training data set to obtain a trained face detection model, where the initial face detection model is a convolutional neural network comprising K parallel convolutional layers, a splicing layer and a global pooling layer, each parallel convolutional layer has a visual perception range of a different preset scale, and K is a positive integer greater than or equal to 3;
a model testing module 84 for testing the trained face detection model using the test data set, and calculating the positioning accuracy of the trained face detection model for facial feature points according to the test result;
a model optimization module 85 for, if the positioning accuracy is less than a preset accuracy threshold, re-dividing the face sample pictures in the sample data set to obtain a new training data set and a new test data set, training the trained face detection model using the new training data set to update the trained face detection model, and testing the trained face detection model using the new test data set, until the positioning accuracy is greater than or equal to the preset accuracy threshold;
a training result module 86 for, if the positioning accuracy is greater than or equal to the preset accuracy threshold, determining the trained face detection model whose positioning accuracy is greater than or equal to the preset accuracy threshold as the final trained face detection model;
a second obtaining module 87 for obtaining a face picture to be detected;
a model prediction module 88 for inputting the face picture to be detected into the trained face detection model for calculation to obtain the feature point prediction result of the face picture, where the feature point prediction result includes the attribute information and the position information of the target feature points.
Further, K is equal to 3, the K parallel convolutional layers include the first convolutional layer, the second convolutional layer and the third convolutional layer, and the model prediction module 88 includes:
a first standardization submodule 881 for standardizing the face picture to be detected to obtain the first face data;
a first convolution calculation submodule 882 for inputting the first face data into the first convolutional layer for convolution calculation to obtain the first convolution result;
a second standardization submodule 883 for standardizing the first convolution result to obtain the second face data;
a second convolution calculation submodule 884 for inputting the second face data into the second convolutional layer for convolution calculation to obtain the second convolution result;
a third standardization submodule 885 for standardizing the second convolution result to obtain the third face data;
a third convolution calculation submodule 886 for inputting the third face data into the third convolutional layer for convolution calculation to obtain the third convolution result;
a splicing submodule 887 for inputting the first convolution result, the second convolution result and the third convolution result into the splicing layer for splicing calculation to obtain the convolution output result;
a pooling submodule 888 for inputting the convolution output result into the global pooling layer for pooling calculation to obtain the feature point prediction result of the face picture to be detected.
Further, the model testing module 84 includes:
an error calculation submodule 841 for calculating, according to the test result, the normalized mean error of each test sample in the test data set corresponding to the test result;
a threshold segmentation submodule 842 for dividing the preset error threshold evenly according to the preset interval value to obtain P sub-thresholds, where P is a positive integer;
a proportion calculation submodule 843 for counting the number of test samples whose normalized mean error is less than each sub-threshold, and calculating the percentage of that number in the total number of test samples in the test data set corresponding to the test result, obtaining P percentages;
an accuracy calculation submodule 844 for calculating the average value of the P percentages and taking the average value as the positioning accuracy.
Further, the first obtaining module 81 includes:
a data obtaining submodule 811 for obtaining video data and pictures;
a video frame extraction submodule 812 for extracting target video frame images from the video data according to the preset frame extraction frequency and the preset maximum frame number;
an annotation submodule 813 for performing facial feature point annotation on the target video frame images and the pictures respectively, obtaining the facial feature point annotation information of the target video frame images and the facial feature point annotation information of the pictures;
a picture processing submodule 814 for processing the pictures according to the preset processing method, obtaining the new pictures and the facial feature point annotation information of the new pictures;
a sample augmentation submodule 815 for taking the target video frame images, the pictures and the new pictures as the face sample pictures.
Further, the picture processing submodule 814 includes:
Flipping submodule 8141, configured to perform horizontal flipping on the pictures, obtaining first pictures and facial feature point markup information of the first pictures;
Rotation submodule 8142, configured to rotate the pictures and the first pictures respectively according to a preset rotation mode, obtaining second pictures and facial feature point markup information of the second pictures;
Translation and scaling submodule 8143, configured to sequentially translate and scale, according to a preset offset and a preset scaling ratio, the face rectangle frames in the pictures, the first pictures, and the second pictures respectively, obtaining third pictures and facial feature point markup information of the third pictures;
Brightness processing submodule 8144, configured to randomly select target pictures from the pictures, the first pictures, the second pictures, and the third pictures according to a preset extraction ratio, and to perform random brightness change processing on the target pictures, obtaining fourth pictures and facial feature point markup information of the fourth pictures;
New sample submodule 8145, configured to use the first pictures, the second pictures, the third pictures, and the fourth pictures as the new pictures.
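As a hedged illustration of two of these augmentations (the function names, the uint8 pixel range, and the landmark array layout are assumptions for illustration, not part of the disclosure), horizontal flipping with landmark remapping and a brightness change might be sketched as:

```python
import numpy as np

def flip_horizontal(img, landmarks):
    """Mirror a picture left-right and remap landmark x-coordinates,
    yielding a 'first picture' and its markup information."""
    w = img.shape[1]
    flipped = img[:, ::-1].copy()
    new_landmarks = landmarks.copy()
    new_landmarks[:, 0] = (w - 1) - landmarks[:, 0]  # mirror the x-coordinate
    return flipped, new_landmarks

def change_brightness(img, scale):
    """Brightness change: scale pixel values and clip back to [0, 255]."""
    return np.clip(img.astype(np.float64) * scale, 0, 255).astype(np.uint8)
```

Rotation and translation/scaling would similarly apply the same geometric transform to both the pixels and the landmark coordinates, so the markup information of each new picture stays consistent with its image.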
For a specific definition of the facial feature point detection apparatus, reference may be made to the definition of the facial feature point detection method above; details are not repeated here. Each module in the above facial feature point detection apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor in a computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 9. The computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is configured to store the sample data set. The network interface of the computer device is configured to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a facial feature point detection method.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When executing the computer program, the processor implements the steps of the facial feature point detection method in the above embodiments, such as steps S1 to S8 shown in Fig. 2. Alternatively, when executing the computer program, the processor implements the functions of the modules/units of the facial feature point detection apparatus in the above embodiments, such as the functions of modules 81 to 88 shown in Fig. 8. To avoid repetition, details are not repeated here.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the facial feature point detection method in the above method embodiments, or implements the functions of the modules/units of the facial feature point detection apparatus in the above apparatus embodiments. To avoid repetition, details are not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be completed by instructing relevant hardware through a computer program. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, the division of the above functional units and modules is merely illustrative. In practical applications, the above functions may be assigned to different functional units or modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of the technical features therein. Such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all fall within the protection scope of the present invention.
Claims (10)
1. A facial feature point detection method, characterized in that the facial feature point detection method comprises:
obtaining a sample data set, wherein the sample data set comprises face sample pictures and facial feature point markup information of each face sample picture;
dividing the sample data set into a training data set and a test data set according to a preset division proportion;
training an initial face detection model using the training data set to obtain a trained face detection model, wherein the initial face detection model is a convolutional neural network comprising K parallel convolution layers, a splicing layer, and a global pooling layer, each parallel convolution layer has a visual perception range of a different preset scale, and K is a positive integer greater than or equal to 3;
testing the trained face detection model using the test data set, and calculating, according to the test result, the locating accuracy of the trained face detection model for facial feature points;
if the locating accuracy is less than a preset accuracy threshold, re-dividing the face sample pictures in the sample data set to obtain a new training data set and a new test data set, training the trained face detection model using the new training data set to update the trained face detection model, and testing the trained face detection model using the new test data set, until the locating accuracy is greater than or equal to the preset accuracy threshold;
if the locating accuracy is greater than or equal to the preset accuracy threshold, determining the trained face detection model whose locating accuracy is greater than or equal to the preset accuracy threshold as a final trained face detection model;
obtaining a face picture to be detected;
inputting the face picture to be detected into the final trained face detection model for calculation, obtaining a feature point prediction result of the face picture, wherein the feature point prediction result comprises attribute information and location information of target feature points.
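The train-test-re-divide loop of claim 1 can be paraphrased as follows. Here `train_fn` and `eval_fn` are hypothetical placeholders for the model update and the locating-accuracy evaluation, and `max_rounds` is an assumed safeguard not present in the claim:

```python
import random

def train_until_accurate(samples, split_ratio, accuracy_threshold,
                         train_fn, eval_fn, max_rounds=100):
    """Re-divide the sample set and keep training until the locating
    accuracy reaches the preset accuracy threshold."""
    model = None
    for _ in range(max_rounds):
        shuffled = random.sample(samples, len(samples))     # re-divide the samples
        cut = int(len(shuffled) * split_ratio)
        train_set, test_set = shuffled[:cut], shuffled[cut:]
        model = train_fn(model, train_set)                  # train / update the model
        if eval_fn(model, test_set) >= accuracy_threshold:  # test the updated model
            break
    return model
```

The point of re-dividing rather than merely continuing training is that each round exposes the model to a different train/test partition of the same sample data set.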
2. The facial feature point detection method according to claim 1, characterized in that K is equal to 3, the K parallel convolution layers comprise a first convolution layer, a second convolution layer, and a third convolution layer, and inputting the face picture to be detected into the final trained face detection model for calculation to obtain the feature point prediction result of the face picture comprises:
standardizing the face picture to be detected, obtaining first face data;
inputting the first face data into the first convolution layer for convolution calculation, obtaining a first convolution result;
standardizing the first convolution result, obtaining second face data;
inputting the second face data into the second convolution layer for convolution calculation, obtaining a second convolution result;
standardizing the second convolution result, obtaining third face data;
inputting the third face data into the third convolution layer for convolution calculation, obtaining a third convolution result;
inputting the first convolution result, the second convolution result, and the third convolution result into the splicing layer for splicing calculation, obtaining a convolution output result;
inputting the convolution output result into the global pooling layer for pooling calculation, obtaining the feature point prediction result.
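Under claim 2, the picture flows through the three convolution layers in cascade with standardization between them, while all three convolution results are spliced and globally pooled. A minimal NumPy sketch of that data flow (single-channel input, averaging kernels, and 'same' zero padding are all assumptions made for illustration, not details of the disclosure):

```python
import numpy as np

def standardize(x):
    # zero-mean, unit-variance standardization applied before each layer
    return (x - x.mean()) / (x.std() + 1e-8)

def conv2d_same(x, k):
    # naive 'same' 2-D convolution so all three results stay spliceable
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def forward(pic, kernels):
    """conv1 -> standardize -> conv2 -> standardize -> conv3, then splice
    all three convolution results and apply global average pooling."""
    d1 = standardize(pic)
    r1 = conv2d_same(d1, kernels[0])  # first convolution result
    d2 = standardize(r1)
    r2 = conv2d_same(d2, kernels[1])  # second convolution result
    d3 = standardize(r2)
    r3 = conv2d_same(d3, kernels[2])  # third convolution result
    spliced = np.stack([r1, r2, r3])  # splicing (concatenation) layer
    return spliced.mean(axis=(1, 2))  # global pooling layer
```

Using kernels of different sizes (e.g. 3x3, 5x5, 7x7) gives each layer a visual perception range of a different scale, which is the stated motivation for splicing all three results.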
3. The facial feature point detection method according to claim 1, characterized in that calculating, according to the test result, the locating accuracy of the trained face detection model for facial feature points comprises:
calculating, according to the test result, the normalized mean error of each test sample in the test data set corresponding to the test result;
dividing a preset error threshold evenly at a preset interval, obtaining P sub-thresholds, wherein P is a positive integer;
counting the number of test samples whose normalized mean error is less than each sub-threshold, and calculating the percentage of that count relative to the total number of test samples in the test data set corresponding to the test result, obtaining P percentages;
calculating the average of the P percentages, and using the average as the locating accuracy.
4. The facial feature point detection method according to any one of claims 1 to 3, characterized in that obtaining the sample data set comprises:
obtaining video data and pictures;
extracting target video frame images from the video data according to a preset frame extraction frequency and a preset maximum frame number;
performing facial feature point marking on the target video frame images and the pictures respectively, obtaining facial feature point markup information of the target video frame images and facial feature point markup information of the pictures;
processing the pictures according to a preset processing method, obtaining new pictures and facial feature point markup information of the new pictures;
using the target video frame images, the pictures, and the new pictures as the face sample pictures.
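The frame-extraction step of claim 4 can be illustrated as follows. This is a sketch only; the relationship between the video frame rate, the preset extraction frequency, and the preset maximum frame number is an assumed interpretation:

```python
def select_frame_indices(total_frames, fps, extract_freq, max_frames):
    """Pick frame indices at the preset extraction frequency (frames to
    keep per second), capped at the preset maximum frame number."""
    step = max(1, round(fps / extract_freq))  # keep one frame every `step` frames
    indices = list(range(0, total_frames, step))
    return indices[:max_frames]
```

For a 100-frame clip at 30 fps with an extraction frequency of 3 frames per second and a maximum of 5 frames, this keeps frames 0, 10, 20, 30, and 40.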
5. The facial feature point detection method according to claim 4, characterized in that processing the pictures according to the preset processing method to obtain the new pictures and the facial feature point markup information of the new pictures comprises:
performing horizontal flipping on the pictures, obtaining first pictures and facial feature point markup information of the first pictures;
rotating the pictures and the first pictures respectively according to a preset rotation mode, obtaining second pictures and facial feature point markup information of the second pictures;
sequentially translating and scaling, according to a preset offset and a preset scaling ratio, the face rectangle frames in the pictures, the first pictures, and the second pictures respectively, obtaining third pictures and facial feature point markup information of the third pictures;
randomly selecting target pictures from the pictures, the first pictures, the second pictures, and the third pictures according to a preset extraction ratio, and performing random brightness change processing on the target pictures, obtaining fourth pictures and facial feature point markup information of the fourth pictures;
using the first pictures, the second pictures, the third pictures, and the fourth pictures as the new pictures.
6. A facial feature point detection apparatus, characterized in that the facial feature point detection apparatus comprises:
a first acquisition module, configured to obtain a sample data set, wherein the sample data set comprises face sample pictures and facial feature point markup information of each face sample picture;
a sample division module, configured to divide the sample data set into a training data set and a test data set according to a preset division proportion;
a model training module, configured to train an initial face detection model using the training data set to obtain a trained face detection model, wherein the initial face detection model is a convolutional neural network comprising K parallel convolution layers, a splicing layer, and a global pooling layer, each parallel convolution layer has a visual perception range of a different preset scale, and K is a positive integer greater than or equal to 3;
a model testing module, configured to test the trained face detection model using the test data set, and to calculate, according to the test result, the locating accuracy of the trained face detection model for facial feature points;
a model optimization module, configured to, if the locating accuracy is less than a preset accuracy threshold, re-divide the face sample pictures in the sample data set to obtain a new training data set and a new test data set, train the trained face detection model using the new training data set to update the trained face detection model, and test the trained face detection model using the new test data set, until the locating accuracy is greater than or equal to the preset accuracy threshold;
a training result module, configured to, if the locating accuracy is greater than or equal to the preset accuracy threshold, determine the trained face detection model whose locating accuracy is greater than or equal to the preset accuracy threshold as a final trained face detection model;
a second acquisition module, configured to obtain a face picture to be detected;
a model prediction module, configured to input the face picture to be detected into the final trained face detection model for calculation, obtaining a feature point prediction result of the face picture, wherein the feature point prediction result comprises attribute information and location information of target feature points.
7. The facial feature point detection apparatus according to claim 6, characterized in that K is equal to 3, the K parallel convolution layers comprise a first convolution layer, a second convolution layer, and a third convolution layer, and the model prediction module comprises:
a first standardization submodule, configured to standardize the face picture to be detected, obtaining first face data;
a first convolution calculation submodule, configured to input the first face data into the first convolution layer for convolution calculation, obtaining a first convolution result;
a second standardization submodule, configured to standardize the first convolution result, obtaining second face data;
a second convolution calculation submodule, configured to input the second face data into the second convolution layer for convolution calculation, obtaining a second convolution result;
a third standardization submodule, configured to standardize the second convolution result, obtaining third face data;
a third convolution calculation submodule, configured to input the third face data into the third convolution layer for convolution calculation, obtaining a third convolution result;
a splicing submodule, configured to input the first convolution result, the second convolution result, and the third convolution result into the splicing layer for splicing calculation, obtaining a convolution output result;
a pooling submodule, configured to input the convolution output result into the global pooling layer for pooling calculation, obtaining the feature point prediction result.
8. The facial feature point detection apparatus according to claim 6 or 7, characterized in that the first acquisition module comprises:
a data acquisition submodule, configured to obtain video data and pictures;
a video frame extraction submodule, configured to extract target video frame images from the video data according to a preset frame extraction frequency and a preset maximum frame number;
a marking submodule, configured to perform facial feature point marking on the target video frame images and the pictures respectively, obtaining facial feature point markup information of the target video frame images and facial feature point markup information of the pictures;
a picture processing submodule, configured to process the pictures according to a preset processing method, obtaining new pictures and facial feature point markup information of the new pictures;
a sample augmentation submodule, configured to use the target video frame images, the pictures, and the new pictures as the face sample pictures.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the facial feature point detection method according to any one of claims 1 to 5.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the facial feature point detection method according to any one of claims 1 to 5.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810963841.0A CN109389030B (en) | 2018-08-23 | 2018-08-23 | Face characteristic point detection method and device, computer equipment and storage medium |
PCT/CN2018/120857 WO2020037898A1 (en) | 2018-08-23 | 2018-12-13 | Face feature point detection method and apparatus, computer device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810963841.0A CN109389030B (en) | 2018-08-23 | 2018-08-23 | Face characteristic point detection method and device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109389030A true CN109389030A (en) | 2019-02-26 |
CN109389030B CN109389030B (en) | 2022-11-29 |
Family
ID=65418558
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810963841.0A Active CN109389030B (en) | 2018-08-23 | 2018-08-23 | Face characteristic point detection method and device, computer equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109389030B (en) |
WO (1) | WO2020037898A1 (en) |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110188627A (en) * | 2019-05-13 | 2019-08-30 | 睿视智觉(厦门)科技有限公司 | A kind of facial image filter method and device |
CN110222726A (en) * | 2019-05-15 | 2019-09-10 | 北京字节跳动网络技术有限公司 | Image processing method, device and electronic equipment |
CN110321807A (en) * | 2019-06-13 | 2019-10-11 | 南京行者易智能交通科技有限公司 | A kind of convolutional neural networks based on multilayer feature fusion are yawned Activity recognition method and device |
CN110363768A (en) * | 2019-08-30 | 2019-10-22 | 重庆大学附属肿瘤医院 | A kind of early carcinoma lesion horizon prediction auxiliary system based on deep learning |
CN110363077A (en) * | 2019-06-05 | 2019-10-22 | 平安科技(深圳)有限公司 | Sign Language Recognition Method, device, computer installation and storage medium |
CN110502432A (en) * | 2019-07-23 | 2019-11-26 | 平安科技(深圳)有限公司 | Intelligent test method, device, equipment and readable storage medium storing program for executing |
CN110705598A (en) * | 2019-09-06 | 2020-01-17 | 中国平安财产保险股份有限公司 | Intelligent model management method and device, computer equipment and storage medium |
CN110728968A (en) * | 2019-10-14 | 2020-01-24 | 腾讯音乐娱乐科技(深圳)有限公司 | Audio accompaniment information evaluation method and device and storage medium |
CN110929635A (en) * | 2019-11-20 | 2020-03-27 | 华南理工大学 | False face video detection method and system based on face cross-over ratio under trust mechanism |
CN110955590A (en) * | 2019-10-15 | 2020-04-03 | 北京海益同展信息科技有限公司 | Interface detection method, image processing method, device, electronic equipment and storage medium |
CN111209812A (en) * | 2019-12-27 | 2020-05-29 | 深圳市优必选科技股份有限公司 | Target face picture extraction method and device and terminal equipment |
CN111368792A (en) * | 2020-03-18 | 2020-07-03 | 北京奇艺世纪科技有限公司 | Characteristic point mark injection molding type training method and device, electronic equipment and storage medium |
CN111680595A (en) * | 2020-05-29 | 2020-09-18 | 新疆爱华盈通信息技术有限公司 | Face recognition method and device and electronic equipment |
CN111783623A (en) * | 2020-06-29 | 2020-10-16 | 北京百度网讯科技有限公司 | Algorithm adjustment method, apparatus, device, and medium for recognizing positioning element |
CN111932593A (en) * | 2020-07-21 | 2020-11-13 | 湖南中联重科智能技术有限公司 | Image registration method, system and equipment based on touch screen gesture correction |
CN112528929A (en) * | 2020-12-22 | 2021-03-19 | 北京百度网讯科技有限公司 | Data labeling method and device, electronic equipment, medium and product |
WO2021057062A1 (en) * | 2019-09-23 | 2021-04-01 | 平安科技(深圳)有限公司 | Method and apparatus for optimizing attractiveness judgment model, electronic device, and storage medium |
CN112668573A (en) * | 2020-12-25 | 2021-04-16 | 平安科技(深圳)有限公司 | Target detection position reliability determination method and device, electronic equipment and storage medium |
CN112733531A (en) * | 2020-12-15 | 2021-04-30 | 平安银行股份有限公司 | Virtual resource allocation method and device, electronic equipment and computer storage medium |
CN112870665A (en) * | 2021-02-04 | 2021-06-01 | 太原理工大学 | Basketball ball control training instrument and control method thereof |
CN113065422A (en) * | 2021-03-19 | 2021-07-02 | 北京达佳互联信息技术有限公司 | Training method of video target detection model and video target detection method and device |
CN113496174A (en) * | 2020-04-07 | 2021-10-12 | 北京君正集成电路股份有限公司 | Method for improving recall rate and accuracy rate of three-level cascade detection |
CN113946703A (en) * | 2021-10-20 | 2022-01-18 | 天翼数字生活科技有限公司 | Picture omission processing method and related device thereof |
WO2022062403A1 (en) * | 2020-09-28 | 2022-03-31 | 平安科技(深圳)有限公司 | Expression recognition model training method and apparatus, terminal device and storage medium |
CN116844646A (en) * | 2023-09-04 | 2023-10-03 | 鲁东大学 | Enzyme function prediction method based on deep contrast learning |
CN117333928A (en) * | 2023-12-01 | 2024-01-02 | 深圳市宗匠科技有限公司 | Face feature point detection method and device, electronic equipment and storage medium |
CN111783623B (en) * | 2020-06-29 | 2024-04-12 | 北京百度网讯科技有限公司 | Algorithm adjustment method, device, equipment and medium for identifying positioning element |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111368731B (en) * | 2020-03-04 | 2023-06-09 | 上海东普信息科技有限公司 | Silence living body detection method, silence living body detection device, silence living body detection equipment and storage medium |
CN111368758B (en) * | 2020-03-09 | 2023-05-23 | 苏宁云计算有限公司 | Face ambiguity detection method, face ambiguity detection device, computer equipment and storage medium |
CN111539248B (en) * | 2020-03-10 | 2023-05-05 | 西安电子科技大学 | Infrared face detection method and device and electronic equipment thereof |
CN113496173B (en) * | 2020-04-07 | 2023-09-26 | 北京君正集成电路股份有限公司 | Detection method of last stage of cascaded face detection |
CN111539600B (en) * | 2020-04-07 | 2023-09-01 | 北京航天自动控制研究所 | Neural network target detection stability evaluation method based on test |
CN111401314B (en) * | 2020-04-10 | 2023-06-13 | 上海东普信息科技有限公司 | Dressing information detection method, device, equipment and storage medium |
CN111462108B (en) * | 2020-04-13 | 2023-05-02 | 山西新华防化装备研究院有限公司 | Machine learning-based head-face product design ergonomics evaluation operation method |
CN113761983B (en) * | 2020-06-05 | 2023-08-22 | 杭州海康威视数字技术股份有限公司 | Method and device for updating human face living body detection model and image acquisition equipment |
CN111881746B (en) * | 2020-06-23 | 2024-04-02 | 安徽清新互联信息科技有限公司 | Face feature point positioning method and system based on information fusion |
CN111860195B (en) * | 2020-06-25 | 2024-03-01 | 广州珠江商业经营管理有限公司 | Security detection method and security detection device based on big data |
CN111917740B (en) * | 2020-07-15 | 2022-08-26 | 杭州安恒信息技术股份有限公司 | Abnormal flow alarm log detection method, device, equipment and medium |
CN111862040B (en) * | 2020-07-20 | 2023-10-31 | 中移(杭州)信息技术有限公司 | Portrait picture quality evaluation method, device, equipment and storage medium |
CN111832522B (en) * | 2020-07-21 | 2024-02-27 | 深圳力维智联技术有限公司 | Face data set construction method, system and computer readable storage medium |
CN112101105B (en) * | 2020-08-07 | 2024-04-09 | 深圳数联天下智能科技有限公司 | Training method and device for human face key point detection model and storage medium |
CN112767303B (en) * | 2020-08-12 | 2023-11-28 | 腾讯科技(深圳)有限公司 | Image detection method, device, equipment and computer readable storage medium |
CN112101121A (en) * | 2020-08-19 | 2020-12-18 | 深圳数联天下智能科技有限公司 | Face sensitivity identification method and device, storage medium and computer equipment |
CN112200236B (en) * | 2020-09-30 | 2023-08-11 | 网易(杭州)网络有限公司 | Training method of face parameter identification model and face parameter identification method |
CN112232236B (en) * | 2020-10-20 | 2024-02-06 | 城云科技(中国)有限公司 | Pedestrian flow monitoring method, system, computer equipment and storage medium |
CN112348791B (en) * | 2020-11-04 | 2023-03-14 | 中冶赛迪信息技术(重庆)有限公司 | Intelligent scrap steel detecting and judging method, system, medium and terminal based on machine vision |
CN112884705A (en) * | 2021-01-06 | 2021-06-01 | 西北工业大学 | Two-dimensional material sample position visualization method |
CN113609900B (en) * | 2021-06-25 | 2023-09-12 | 南京信息工程大学 | Face positioning method and device for local generation, computer equipment and storage medium |
CN113780145A (en) * | 2021-09-06 | 2021-12-10 | 苏州贝康智能制造有限公司 | Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium |
CN114267069A (en) * | 2021-12-25 | 2022-04-01 | 福州大学 | Human face detection method based on data generalization and feature enhancement |
CN115937958B (en) * | 2022-12-01 | 2023-12-15 | 北京惠朗时代科技有限公司 | Blink detection method, blink detection device, blink detection equipment and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030101012A1 (en) * | 2001-08-24 | 2003-05-29 | Bio-Rad Laboratories, Inc. | Biometric quality control process |
US20120300090A1 (en) * | 2011-05-23 | 2012-11-29 | Ziv Aviv | Fast face detection technique |
US20170213359A1 (en) * | 2016-01-27 | 2017-07-27 | Samsung Electronics Co., Ltd. | Method and apparatus for positioning feature point |
CN107403141A (en) * | 2017-07-05 | 2017-11-28 | 中国科学院自动化研究所 | Method for detecting human face and device, computer-readable recording medium, equipment |
CN107423690A (en) * | 2017-06-26 | 2017-12-01 | 广东工业大学 | A kind of face identification method and device |
CN107633265A (en) * | 2017-09-04 | 2018-01-26 | 深圳市华傲数据技术有限公司 | For optimizing the data processing method and device of credit evaluation model |
CN107871099A (en) * | 2016-09-23 | 2018-04-03 | 北京眼神科技有限公司 | Face detection method and apparatus |
CN108229268A (en) * | 2016-12-31 | 2018-06-29 | 商汤集团有限公司 | Expression Recognition and convolutional neural networks model training method, device and electronic equipment |
CN108319908A (en) * | 2018-01-26 | 2018-07-24 | 华中科技大学 | A kind of untethered environment method for detecting human face based on Pixel-level Differential Characteristics |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105389573B (en) * | 2015-12-23 | 2019-03-26 | 山东大学 | A kind of face identification method based on three value mode layering manufactures of part |
CN106951840A (en) * | 2017-03-09 | 2017-07-14 | 北京工业大学 | A kind of facial feature points detection method |
2018
- 2018-08-23 CN CN201810963841.0A patent/CN109389030B/en active Active
- 2018-12-13 WO PCT/CN2018/120857 patent/WO2020037898A1/en active Application Filing
Cited By (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110188627A (en) * | 2019-05-13 | 2019-08-30 | 睿视智觉(厦门)科技有限公司 | A kind of facial image filter method and device |
CN110188627B (en) * | 2019-05-13 | 2021-11-23 | 睿视智觉(厦门)科技有限公司 | Face image filtering method and device |
CN110222726A (en) * | 2019-05-15 | 2019-09-10 | 北京字节跳动网络技术有限公司 | Image processing method, device and electronic equipment |
CN110363077A (en) * | 2019-06-05 | 2019-10-22 | 平安科技(深圳)有限公司 | Sign language recognition method, device, computer device and storage medium |
CN110321807A (en) * | 2019-06-13 | 2019-10-11 | 南京行者易智能交通科技有限公司 | Yawning behavior recognition method and device based on a multilayer-feature-fusion convolutional neural network |
CN110502432A (en) * | 2019-07-23 | 2019-11-26 | 平安科技(深圳)有限公司 | Intelligent test method, device, equipment and readable storage medium |
CN110502432B (en) * | 2019-07-23 | 2023-11-28 | 平安科技(深圳)有限公司 | Intelligent test method, device, equipment and readable storage medium |
CN110363768A (en) * | 2019-08-30 | 2019-10-22 | 重庆大学附属肿瘤医院 | Early cancer focus range prediction auxiliary system based on deep learning |
CN110363768B (en) * | 2019-08-30 | 2021-08-17 | 重庆大学附属肿瘤医院 | Early cancer focus range prediction auxiliary system based on deep learning |
CN110705598A (en) * | 2019-09-06 | 2020-01-17 | 中国平安财产保险股份有限公司 | Intelligent model management method and device, computer equipment and storage medium |
WO2021057062A1 (en) * | 2019-09-23 | 2021-04-01 | 平安科技(深圳)有限公司 | Method and apparatus for optimizing attractiveness judgment model, electronic device, and storage medium |
CN110728968A (en) * | 2019-10-14 | 2020-01-24 | 腾讯音乐娱乐科技(深圳)有限公司 | Audio accompaniment information evaluation method, device and storage medium |
CN110955590A (en) * | 2019-10-15 | 2020-04-03 | 北京海益同展信息科技有限公司 | Interface detection method, image processing method, device, electronic equipment and storage medium |
CN110929635B (en) * | 2019-11-20 | 2023-02-10 | 华南理工大学 | False face video detection method and system based on face cross-over ratio under trust mechanism |
CN110929635A (en) * | 2019-11-20 | 2020-03-27 | 华南理工大学 | False face video detection method and system based on face cross-over ratio under trust mechanism |
CN111209812B (en) * | 2019-12-27 | 2023-09-12 | 深圳市优必选科技股份有限公司 | Target face picture extraction method and device and terminal equipment |
CN111209812A (en) * | 2019-12-27 | 2020-05-29 | 深圳市优必选科技股份有限公司 | Target face picture extraction method and device and terminal equipment |
CN111368792A (en) * | 2020-03-18 | 2020-07-03 | 北京奇艺世纪科技有限公司 | Feature point annotation model training method and device, electronic equipment and storage medium |
CN113496174B (en) * | 2020-04-07 | 2024-01-23 | 北京君正集成电路股份有限公司 | Method for improving recall rate and accuracy rate of three-stage cascade detection |
CN113496174A (en) * | 2020-04-07 | 2021-10-12 | 北京君正集成电路股份有限公司 | Method for improving recall rate and accuracy rate of three-level cascade detection |
CN111680595A (en) * | 2020-05-29 | 2020-09-18 | 新疆爱华盈通信息技术有限公司 | Face recognition method and device and electronic equipment |
CN111783623B (en) * | 2020-06-29 | 2024-04-12 | 北京百度网讯科技有限公司 | Algorithm adjustment method, device, equipment and medium for identifying positioning element |
CN111783623A (en) * | 2020-06-29 | 2020-10-16 | 北京百度网讯科技有限公司 | Algorithm adjustment method, apparatus, device, and medium for recognizing positioning element |
CN111932593B (en) * | 2020-07-21 | 2024-04-09 | 湖南中联重科智能技术有限公司 | Image registration method, system and equipment based on touch screen gesture correction |
CN111932593A (en) * | 2020-07-21 | 2020-11-13 | 湖南中联重科智能技术有限公司 | Image registration method, system and equipment based on touch screen gesture correction |
WO2022062403A1 (en) * | 2020-09-28 | 2022-03-31 | 平安科技(深圳)有限公司 | Expression recognition model training method and apparatus, terminal device and storage medium |
CN112733531A (en) * | 2020-12-15 | 2021-04-30 | 平安银行股份有限公司 | Virtual resource allocation method and device, electronic equipment and computer storage medium |
CN112733531B (en) * | 2020-12-15 | 2023-08-18 | 平安银行股份有限公司 | Virtual resource allocation method and device, electronic equipment and computer storage medium |
CN112528929A (en) * | 2020-12-22 | 2021-03-19 | 北京百度网讯科技有限公司 | Data labeling method and device, electronic equipment, medium and product |
CN112668573B (en) * | 2020-12-25 | 2022-05-10 | 平安科技(深圳)有限公司 | Target detection position reliability determination method and device, electronic equipment and storage medium |
CN112668573A (en) * | 2020-12-25 | 2021-04-16 | 平安科技(深圳)有限公司 | Target detection position reliability determination method and device, electronic equipment and storage medium |
CN112870665A (en) * | 2021-02-04 | 2021-06-01 | 太原理工大学 | Basketball ball control training instrument and control method thereof |
CN113065422A (en) * | 2021-03-19 | 2021-07-02 | 北京达佳互联信息技术有限公司 | Training method of video target detection model and video target detection method and device |
CN113946703A (en) * | 2021-10-20 | 2022-01-18 | 天翼数字生活科技有限公司 | Picture omission processing method and related device thereof |
CN116844646A (en) * | 2023-09-04 | 2023-10-03 | 鲁东大学 | Enzyme function prediction method based on deep contrast learning |
CN116844646B (en) * | 2023-09-04 | 2023-11-24 | 鲁东大学 | Enzyme function prediction method based on deep contrast learning |
CN117333928A (en) * | 2023-12-01 | 2024-01-02 | 深圳市宗匠科技有限公司 | Face feature point detection method and device, electronic equipment and storage medium |
CN117333928B (en) * | 2023-12-01 | 2024-03-22 | 深圳市宗匠科技有限公司 | Face feature point detection method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109389030B (en) | 2022-11-29 |
WO2020037898A1 (en) | 2020-02-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109389030A (en) | Facial feature points detection method, apparatus, computer equipment and storage medium | |
US10262190B2 (en) | Method, system, and computer program product for recognizing face | |
Sharma | Information Measure Computation and its Impact in MI COCO Dataset | |
CN109583445A (en) | Character image correction processing method, device, equipment and storage medium | |
CN104794504B (en) | Pictorial pattern character detecting method based on deep learning | |
CN109492643A (en) | OCR-based certificate recognition method, device, computer equipment and storage medium | |
CN110309706A (en) | Face key point detection method, apparatus, computer equipment and storage medium | |
CN109241904A (en) | Text recognition model training and character recognition method, device, equipment and medium | |
CN109271870A (en) | Pedestrian re-identification method, device, computer equipment and storage medium | |
CN108985232A (en) | Facial image comparison method, device, computer equipment and storage medium | |
CN109214273A (en) | Facial image comparison method, device, computer equipment and storage medium | |
CN105303179A (en) | Fingerprint identification method and fingerprint identification device | |
CN105809651B (en) | Image saliency detection method based on edge dissimilarity comparison | |
CN106022317A (en) | Face identification method and apparatus | |
CN106203242A (en) | Similar image recognition method and device | |
CN110427970A (en) | Image classification method, device, computer equipment and storage medium | |
CN109389038A (en) | Information detection method, device and equipment | |
CN107609519A (en) | Facial feature point localization method and device | |
CN109598234A (en) | Key point detection method and apparatus | |
WO2021238548A1 (en) | Region recognition method, apparatus and device, and readable storage medium | |
CN109993021A (en) | Frontal face detection method, device and electronic equipment | |
CN108960145A (en) | Facial image detection method, device, storage medium and electronic equipment | |
CN110147833A (en) | Facial image processing method, apparatus, system and readable storage medium | |
CN111598899A (en) | Image processing method, image processing apparatus, and computer-readable storage medium | |
CN107871103A (en) | Face authentication method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||