CN110399844A - Cross-platform face key point recognition and tracking method and system - Google Patents
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06N3/045 - Combinations of networks
- G06N3/08 - Learning methods
- G06V40/161 - Detection; Localisation; Normalisation
Abstract
The present invention relates to a cross-platform face key point recognition and tracking method and system. The method comprises the steps of: first collecting face images, labeling the key points of each image, and producing a face-image training sample set; training a multi-task convolutional neural network model based on a multi-task convolutional neural network algorithm; capturing a face image, preprocessing it, loading the multi-task convolutional neural network model, reading the current frame, and synchronously obtaining the face region and the corresponding face key point locations; taking the key point information of the first frame as the input for the current frame, computing the current frame's face key points with the multi-task convolutional neural network model, and determining whether the key points of the current frame are in a successfully-tracked state; finally, after accumulating the key point information of the expected number of tracked frames, computing the Euler angles of the face with a pre-trained face deflection angle computation model to complete face pose estimation.
Description
Technical field
The present invention relates to the field of artificial intelligence, and in particular to a cross-platform face key point recognition and tracking method and system.
Background
Existing face detection and tracking systems fall into two categories: one is based on traditional algorithms implemented with open-source libraries such as OpenCV or MTCNN; the other is in-house face key point detection and tracking developed by some enterprises, based on machine learning, which trains a model on labeled data to perform detection and tracking.
The current technology has the following shortcomings:
Algorithms that rely solely on the OpenCV or MTCNN open-source libraries suffer from slow recognition and few tracked key points, and third-party open-source libraries have weak security and many back doors, which can pose threats. In the face region detection of such systems, image brightness, exposure range, and contrast in the video lead to low detection accuracy, and the video image size makes face region detection time-consuming and inefficient.
Systems based solely on machine learning are limited: the number of recognizable face key points is fixed by the pre-trained model to a certain value, and additional key points cannot be derived from the recognized ones, so the approach is inflexible and cannot be extended.
For machine learning training algorithms, the accuracy of the data set affects the accuracy of the trained model; at present, raw data sets in the industry are all labeled manually, with a high error rate.
For face recognition and tracking systems currently on the market, the recognition rate drops by 50% or more for different ethnic groups, dark skin tones, or subjects wearing glasses or hats, and tracking fails.
In traditional tracking systems, the only method for improving robustness is Kalman filtering, which delays the face key points recognized in the current frame after they pass through the Kalman filter.
Summary of the invention
The purpose of the present invention is to provide a cross-platform face key point recognition and tracking method and system that: preprocesses images before detection to improve accuracy; unifies face region detection and face key point recognition, avoiding the precision loss of passing data between the two stages and further improving accuracy; performs face region detection only on the first frame during tracking, using the key points of the previous frame as the input for the current frame in subsequent recognition, saving computation time and improving system efficiency; and performs robustness strengthening and face key point extension after recognition or tracking, improving the stability of key point recognition and increasing the number of recognized key points.
To achieve the above object, the technical scheme of the present invention is as follows:
A cross-platform face key point recognition and tracking method comprises the following steps:
Step 1: collect face images, label the key points of each image, and produce a face-image training sample set;
Step 2: based on a multi-task convolutional neural network algorithm, train face region detection and face key point recognition on each image to obtain a trained multi-task convolutional neural network model;
Step 3: capture a face image and preprocess it, load the multi-task convolutional neural network model, read the current frame, and synchronously obtain the face region and the corresponding face key point locations;
Step 4: take the key point information of the first frame as the input for the current frame, compute the current frame's face key points with the multi-task convolutional neural network model, and determine whether the key points of the current frame are in a successfully-tracked state;
Step 5: after accumulating the key point information of the expected number of tracked frames, compute the Euler angles of the face with a pre-trained face deflection angle computation model to complete face pose estimation.
In step 3, face image preprocessing includes scaling the image, measuring image brightness, exposure, sharpness, and contrast, and adjusting them to optimal values.
Further, the multi-task convolutional network model is divided into a three-tier network architecture of P-Net, R-Net, and O-Net. The P-Net network detects face image regions and quickly generates face candidate boxes; the R-Net network filters out poor face candidate boxes and performs bounding-box regression and face key point localization on the selected candidates, outputting high-confidence face regions; the O-Net network performs bounding-box regression and facial feature localization again, outputting the top-left and bottom-right coordinates of the face region and five facial feature points.
In step 4, after the key points of the current frame are recognized, a layered Kalman filtering algorithm is applied to strengthen the robustness of the face key points, and then an SVM (support vector machine) method determines whether the key points of the current frame are in a successfully-tracked state.
In step 5, when the tracked face key points accumulate to 68, the 68 key points are associated by triangulation and the three angle bisectors of selected key point triples are extended to form a focus; the focus is an estimated key point of a facial organ or of a location around the face, and the face key points are extended in this way to 108.
Further, the specific method of strengthening the robustness of face key points with the layered Kalman filtering algorithm is as follows: the face key points are divided into outer contour points 0-16 and inner contour points 17-67; for the outer contour, two memory buffers of m and n times the face shape size store the face shape coordinates of the most recent m and n successfully tracked frames, with a start flag set, where 1 ≤ n < m ≤ 100; the stored valid n- and m-frame face shape coordinates and a Kalman filter are used to filter the currently obtained shape coordinates, and the filtered face shape coordinates are output as the true coordinates of the current frame.
In step 2, before the multi-task convolutional neural network model is trained, a linear SVM criterion for whether the face key points are aligned with the face is added and stored in the model as a file; during face key point tracking it is used to determine whether the current frame is tracked successfully.
A cross-platform face key point recognition and tracking system is characterized by comprising a face detection and key point recognition model training module, a face image processing module, and a face information computing module.
The face detection and key point recognition model training module labels key points on the collected face images and trains with the multi-task convolutional neural network algorithm to obtain the multi-task convolutional neural network model.
The face image processing module includes an image repair and preprocessing unit, a face region and key point recognition unit, and a face region and key point tracking unit.
The image repair and preprocessing unit scales the face image to be detected, measures its brightness, exposure, sharpness, and contrast, and adjusts them to optimal values.
The face region and key point recognition unit reads the current frame of the face image to be detected into the multi-task convolutional neural network model and preliminarily obtains the face region and the corresponding face key point locations.
The face region and key point tracking unit uses the key point information of the previous frame as the input for the current frame and determines whether the current frame's face key points are in a successfully-tracked state, until the face key points of the expected number of frames have been accumulated.
The face information computing module computes the Euler angles of the face with a pre-trained face deflection angle computation model to complete face pose estimation.
Further, the face image processing module also includes a face key point robustness strengthening unit, which applies the layered Kalman filtering algorithm to strengthen the robustness of the current frame's face key points.
Further, the face image processing module includes a face key point extension computing unit, which associates the key points by triangulation and extends the three angle bisectors of selected key point triples to form a focus; the focus is an estimated key point of a facial organ or of a location around the face, thereby extending the face key points.
The cross-platform face key point recognition and tracking method and system of the present invention have the following beneficial effects:
Improved recognition accuracy: image repair and preprocessing are performed before detection; the parameters of the image to be detected, including size, brightness, exposure, and contrast, are adjusted to their optimal detection values, and this preprocessing improves recognition accuracy.
Second, face region detection and face key point recognition are unified, avoiding the precision loss of passing data between face region detection and face key point recognition and improving accuracy.
Improved system efficiency, stability, and availability: face region detection is needed only on the first frame during tracking; in subsequent face key point recognition, the key points of the previous frame serve as the input for the current frame, saving computation time and improving efficiency. Robustness strengthening and face key point extension are performed after recognition or tracking, improving the stability of key point recognition, and the trained model increases the number of recognized key points to 108, improving system availability.
The layered Kalman filter performs filtering separately on the face key points at different locations, which reduces the computation of the filter, improves speed, and eliminates delay while still suppressing jitter, solving the delay caused by Kalman filtering in conventional methods.
Brief description of the drawings
The accompanying drawings, which constitute a part of the specification, illustrate embodiments of the present invention and, together with the description, explain the principles of the invention. The present invention can be understood more clearly with reference to the drawings:
Fig. 1 is a flowchart of the cross-platform face key point recognition and tracking method in an embodiment of the present invention;
Fig. 2 is a flowchart of face image preprocessing in an embodiment of the present invention;
Fig. 3 is a functional block diagram of the cross-platform face key point recognition and tracking system in an embodiment of the present invention;
Fig. 4 is an output diagram of the multi-task convolutional neural network model in an embodiment of the present invention;
Fig. 5 is an output diagram of key point extension by triangulation in an embodiment of the present invention.
Detailed description of the embodiments
The technical solution of the present invention is described in further detail below with reference to the drawings and embodiments.
As shown in Fig. 1, the cross-platform face key point recognition and tracking method of the present invention comprises the following steps:
Step 1: collect face images, label the key points of each image, and produce a face-image training sample set;
Step 2: based on a multi-task convolutional neural network algorithm, train face region detection and face key point recognition on each image to obtain a trained multi-task convolutional neural network model;
Step 3: capture a face image and preprocess it, load the multi-task convolutional neural network model, read the current frame, and synchronously obtain the face region and the corresponding face key point locations;
Step 4: take the key point information of the first frame as the input for the current frame, compute the current frame's face key points with the multi-task convolutional neural network model, and determine with an SVM (support vector machine) method whether the key points of the current frame are in a successfully-tracked state;
Step 5: after accumulating the key point information of the expected number of tracked frames, compute the Euler angles of the face with a pre-trained face deflection angle computation model to complete face pose estimation.
Before the multi-task convolutional neural network model is trained, a linear SVM criterion for whether the face key points are aligned with the face is added and stored in the model as a file; during face key point tracking it is used to determine whether the current frame is tracked successfully.
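As a rough illustration of how such a stored linear criterion could be evaluated at runtime, the sketch below applies a linear decision function to normalized key point coordinates. The weight vector `w` and bias `b` stand in for the parameters the training stage would store alongside the model; their shape and values here are purely illustrative.

```python
import numpy as np

def tracking_ok(keypoints, w, b):
    """Linear-SVM style decision: sign(w . x + b) on normalized key point
    coordinates. In the patent, w and b would be learned offline and stored
    as a file with the network model; here they are assumed inputs."""
    x = np.asarray(keypoints, dtype=float).ravel()
    x = (x - x.mean()) / max(x.std(), 1e-9)  # remove translation/scale before scoring
    return float(np.dot(w, x) + b) > 0
```

A frame whose key points score below the decision boundary would be treated as a lost track, triggering a fresh face region detection.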
As shown in Fig. 2, face image preprocessing includes scaling the image, measuring its brightness, exposure, sharpness, and contrast, and adjusting them to optimal values.
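A minimal sketch of this preprocessing stage, assuming a grayscale frame; the target brightness (mean), target contrast (standard deviation), the 640-pixel size cap, and the nearest-neighbour resize are illustrative choices, not values from the patent.

```python
import numpy as np

def preprocess(img, target_mean=128.0, target_std=48.0, max_side=640):
    """Scale the frame and normalize brightness/contrast toward nominal values.
    target_mean, target_std, and max_side are assumed parameters."""
    h, w = img.shape[:2]
    scale = min(1.0, max_side / max(h, w))
    if scale < 1.0:
        # nearest-neighbour index resize keeps the sketch dependency-free
        ys = (np.arange(int(h * scale)) / scale).astype(int)
        xs = (np.arange(int(w * scale)) / scale).astype(int)
        img = img[ys][:, xs]
    img = img.astype(np.float32)
    std = max(float(img.std()), 1e-6)
    # shift contrast to target_std, then brightness to target_mean
    img = (img - img.mean()) / std * target_std + target_mean
    return np.clip(img, 0, 255).astype(np.uint8)
```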
As shown in Fig. 4, the multi-task convolutional network model is divided into a three-tier network architecture of P-Net, R-Net, and O-Net. The P-Net network detects face image regions and quickly generates face candidate boxes; the R-Net network filters out poor face candidate boxes and performs bounding-box regression and face key point localization on the selected candidates, outputting high-confidence face regions; the O-Net network performs bounding-box regression and facial feature localization again, outputting the top-left and bottom-right coordinates of the face region and five facial feature points.
The multi-task convolutional neural network (MTCNN) processes face region detection and face key point detection in parallel. Its main framework is divided into the three-tier P-Net, R-Net, and O-Net architecture, a multi-task neural network model for face detection that adopts the candidate-box-plus-classifier idea for fast and efficient face detection. The three cascaded networks are P-Net, which quickly generates candidate windows; R-Net, which filters and selects candidate windows with high precision; and O-Net, which generates the final bounding box and the face key points. The multi-task convolutional neural network model also uses an image pyramid, bounding-box regression, and non-maximum suppression. The detailed process is as follows:
Constructing the image pyramid: the image is first transformed to different scales to construct an image pyramid, so that faces of different sizes can be detected.
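The pyramid step can be sketched as the usual MTCNN scale schedule: resize the frame so that the smallest face of interest maps onto P-Net's 12x12 input. The 20-pixel minimum face size and the 0.709 scale factor (roughly sqrt(0.5)) are conventional MTCNN defaults, not values stated in the patent.

```python
def pyramid_scales(min_side, min_face=20, factor=0.709):
    """Return the scales at which the frame is resized before P-Net, so a
    min_face-pixel face maps onto P-Net's 12x12 receptive field. min_face and
    factor are assumed defaults."""
    scales = []
    m = 12.0 / min_face        # largest scale
    side = min_side * m
    while side >= 12:          # stop once the image is smaller than P-Net's input
        scales.append(m)
        m *= factor
        side *= factor
    return scales
```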
P-Net, in full Proposal Network, is a fully convolutional network (FCN) that performs preliminary feature extraction and box calibration on the image pyramid constructed in the previous step, adjusts the windows with bounding-box regression, and filters out most windows with non-maximum suppression.
P-Net is the region detection network for face regions. The input features pass through three convolutional layers, after which a face classifier judges whether each region is a face, while bounding-box regression and a facial key point locator preliminarily detect face regions. Its output contains many regions that may contain faces, which are input to R-Net for further processing.
R-Net, in full Refine Network, is a convolutional neural network that adds a fully connected layer to the first-stage P-Net, so its screening of the input data is stricter. After P-Net, many prediction windows remain in the face image; the R-Net network filters out poor face candidate boxes and performs bounding-box regression and face key point localization on the selected candidates, outputting high-confidence face regions.
Only the P-Net outputs with sufficient confidence of being face regions are input to R-Net for refined selection, and most erroneous inputs are discarded; bounding-box regression and the face key point locator are applied again for bounding-box regression and key point localization of the face region, and the more credible face regions are finally output for O-Net. Compared with P-Net, which uses a fully convolutional 1x1x32 feature output, R-Net uses a 128-unit fully connected layer after its last convolutional layer, retaining more image features, and its accuracy is also superior to P-Net's.
O-Net, in full Output Network, is a more complex convolutional neural network with one more convolutional layer than R-Net. O-Net differs from R-Net in that this layer's structure identifies face regions under more supervision and regresses the facial feature points, finally outputting five facial feature points.
It uses a 256-unit fully connected layer, retaining more image features, and again performs face discrimination, face region bounding-box regression, and facial feature localization, finally outputting the top-left and bottom-right coordinates of the face region and five facial feature points. O-Net has more feature inputs and a more complex network structure, and therefore better performance; the output of this layer is the final output of the network model.
To balance MTCNN's performance and accuracy and avoid the huge performance cost of traditional approaches such as a sliding window plus a classifier, a small model first generates candidate target region boxes with a certain probability, and a more complex model then performs fine classification and higher-precision bounding-box regression. This is executed as a three-layer cascade: the input layer uses an image pyramid for the scale transformation of the initial image; P-Net generates a large number of candidate target region boxes; R-Net then performs a first selection and bounding-box regression on these boxes, excluding most negative examples; and the more complex, higher-precision network O-Net then performs discrimination and region box regression on the remaining target region boxes.
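Of the techniques named above, non-maximum suppression is the glue between the three stages: the highest-scoring candidate box is kept and heavily overlapping rivals are dropped. A plain greedy implementation, shown as an illustration rather than the patent's exact code:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over (x1, y1, x2, y2) boxes.
    Returns the indices of the kept boxes, best score first."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # intersection of the best box with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]   # drop boxes overlapping the winner
    return keep
```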
After the key points of the current frame are recognized, the layered Kalman filtering algorithm is applied to strengthen the robustness of the face key points, and then the SVM support vector machine method determines whether the key points of the current frame are in a successfully-tracked state.
Conventional methods pass all face key points through a filtering algorithm such as a Kalman filter to eliminate key point jitter, but this makes the face key points computed for the current frame reflect the key point positions of the previous n frames, causing delay. The layered Kalman filter instead filters the face key points at different locations separately.
The specific method of strengthening the robustness of face key points with the layered Kalman filtering algorithm is as follows: the face key points are divided into outer contour points 0-16 and inner contour points 17-67; for the outer contour, two memory buffers of m and n times the face shape size store the face shape coordinates of the most recent m and n successfully tracked frames, with a start flag set, where 1 ≤ n < m ≤ 100. The stored valid n- and m-frame face shape coordinates and a Kalman filter are used to filter the currently obtained shape coordinates, and the filtered face shape coordinates are output as the true coordinates of the current frame. The layered operation reduces the computation of the filter, improves speed, and eliminates delay while still suppressing jitter.
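A sketch of the layered idea: outer contour points 0-16 and inner points 17-67 are smoothed over histories of different lengths (m frames for the slow-moving contour, n for the more mobile inner features, with n < m). For brevity, a windowed mean over the stored frames stands in for the full Kalman update, and the window sizes are illustrative.

```python
import numpy as np
from collections import deque

class LayeredKeypointFilter:
    """Layered smoothing per the patent's split of 68 points into outer
    contour (0-16) and inner contour (17-67). A windowed mean over the stored
    frames stands in for the Kalman update; m and n are assumed defaults."""
    def __init__(self, m=8, n=3):
        assert 1 <= n < m <= 100
        self.hist_outer = deque(maxlen=m)   # longer history: heavier smoothing
        self.hist_inner = deque(maxlen=n)   # shorter history: faster response

    def update(self, pts68):
        pts68 = np.asarray(pts68, dtype=float)
        self.hist_outer.append(pts68[:17])
        self.hist_inner.append(pts68[17:])
        return np.vstack([np.mean(list(self.hist_outer), axis=0),
                          np.mean(list(self.hist_inner), axis=0)])
```

Because the two groups are filtered independently, the inner features react to motion quickly while the contour stays stable, which is the delay-versus-jitter trade the patent describes.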
In the face key point robustness strengthening module, a least-squares SVM algorithm performs face alignment correction on the current frame's key points, bringing them closer to the real face position and mitigating the jitter caused by excessive error deviation of the key points across adjacent frames.
As shown in Fig. 5, when the tracked face key points accumulate to 68, the 68 key points are associated by triangulation and the three angle bisectors of selected key point triples are extended to form a focus; the focus is an estimated key point of a facial organ or of a location around the face, and the face key points are extended in this way to 108. With preset critical parameters, facial expression information is then computed, such as whether the mouth is open, the eyes are closed, the eyebrows are raised, the lips are pouted, or the tongue is out, realizing facial expression recognition and estimation.
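The geometric step can be illustrated as follows: for a triple of tracked points, the three angle bisectors meet at the triangle's incenter, which becomes a new estimated key point. The index triples passed in are arbitrary here; the patent's actual triangulation table is not disclosed.

```python
import numpy as np

def extend_keypoints(pts68, triples):
    """Extend the tracked points: for each index triple the three angle
    bisectors intersect at the incenter, the weighted vertex average with
    opposite side lengths as weights, which is appended as a new point."""
    pts68 = np.asarray(pts68, dtype=float)
    new_pts = []
    for i, j, k in triples:
        a, b, c = pts68[i], pts68[j], pts68[k]
        # side lengths opposite each vertex
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(c - a)
        lc = np.linalg.norm(a - b)
        incenter = (la * a + lb * b + lc * c) / (la + lb + lc)
        new_pts.append(incenter)
    return np.vstack([pts68, new_pts])
```

With 40 such triples the 68 tracked points would grow to 108, matching the counts in the text.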
A cross-platform face key point recognition and tracking system, as shown in Fig. 3, includes a face detection and key point recognition model training module, a face image processing module, and a face information computing module; the face detection and key point recognition model training module and the face information computing module are connected to the face image processing module.
The face detection and key point recognition model training module labels key points on the collected face images and trains with the multi-task convolutional neural network algorithm to obtain the multi-task convolutional neural network model.
The face image processing module includes an image repair and preprocessing unit and, connected to it in sequence, a face region and key point recognition unit, a face region and key point tracking unit, a face key point robustness strengthening unit, and a face key point extension computing unit.
The image repair and preprocessing unit scales the face image to be detected, measures its brightness, exposure, sharpness, and contrast, and adjusts them to optimal values.
The face region and key point recognition unit reads the current frame of the face image to be detected into the multi-task convolutional neural network model and preliminarily obtains the face region and the corresponding face key point locations.
The face region and key point tracking unit uses the key point information of the previous frame as the input for the current frame and determines whether the current frame's face key points are in a successfully-tracked state, until the face key points of the expected number of frames have been accumulated.
The face key point robustness strengthening unit applies the layered Kalman filtering algorithm to strengthen the robustness of the current frame's face key points.
The face key point extension computing unit associates the key points by triangulation and extends the three angle bisectors of selected key point triples to form a focus; the focus is an estimated key point of a facial organ or of a location around the face, thereby extending the face key points.
The face information computing module computes the Euler angles of the face with the pre-trained face deflection angle computation model to complete face pose estimation.
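The patent obtains the Euler angles from a trained deflection angle model whose internals are not disclosed. Purely as an illustration of the final step, the sketch below decomposes a head rotation matrix into pitch, yaw, and roll under an assumed ZYX convention; the convention and the gimbal-lock fallback are assumptions, not the patent's method.

```python
import numpy as np

def euler_from_rotation(R):
    """Recover (pitch, yaw, roll) in degrees from a 3x3 rotation matrix,
    assuming a ZYX (roll about z, yaw about y, pitch about x) convention."""
    sy = np.hypot(R[0, 0], R[1, 0])
    if sy > 1e-6:
        pitch = np.arctan2(R[2, 1], R[2, 2])
        yaw = np.arctan2(-R[2, 0], sy)
        roll = np.arctan2(R[1, 0], R[0, 0])
    else:  # gimbal lock: yaw near +/-90 degrees, roll indeterminate
        pitch = np.arctan2(-R[1, 2], R[1, 1])
        yaw = np.arctan2(-R[2, 0], sy)
        roll = 0.0
    return np.degrees([pitch, yaw, roll])
```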
The main functions provided by the cross-platform face key point recognition and tracking system of the present invention include: cross-platform use (Windows, Mac, iOS, Android, Linux); real-time detection of each frame of video, recognizing the face region in the image and the corresponding face key point locations; deriving the locations of about 40 additional key points around the face and the corresponding organs from the face region and the corresponding 68 face key points; and finally estimating the face pose and expression detected in each video frame.
The specific embodiments described above further explain the purpose, technical solution, and beneficial effects of the present invention in detail. It should be understood that the above is merely a specific embodiment of the present invention and is not intended to limit the protection scope of the present invention; any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
1. A cross-platform face key point recognition and tracking method, characterized by comprising the following steps:
Step 1: collect face images, label the key points of each image, and produce a face-image training sample set;
Step 2: based on a multi-task convolutional neural network algorithm, train face region detection and face key point recognition on each image to obtain a trained multi-task convolutional neural network model;
Step 3: capture a face image and preprocess it, load the multi-task convolutional neural network model, read the current frame, and synchronously obtain the face region and the corresponding face key point locations;
Step 4: take the key point information of the first frame as the input for the current frame, compute the current frame's face key points with the multi-task convolutional neural network model, and determine whether the key points of the current frame are in a successfully-tracked state;
Step 5: after accumulating the key point information of the expected number of tracked frames, compute the Euler angles of the face with a pre-trained face deflection angle computation model to complete face pose estimation.
2. The cross-platform face key point recognition and tracking method according to claim 1, characterized in that: in step 3, face image preprocessing includes scaling the image, measuring image brightness, exposure, sharpness, and contrast, and adjusting them to optimal values.
3. The cross-platform face key point recognition and tracking method according to claim 1, characterized in that: the multi-task convolutional neural network model is divided into a three-layer network structure of P-Net, R-Net and O-Net; the P-Net network is used to detect face image regions and quickly generate face candidate boxes; the R-Net network is used to filter out candidate boxes with poor results and to perform bounding-box regression and face key point localization on the selected candidate boxes, outputting face regions of high credibility; the O-Net network performs face region bounding-box regression and facial feature extraction again, and outputs the upper-left corner coordinates and lower-right corner coordinates of the face region together with five feature points of the face region.
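The claim-3 cascade can be illustrated by how candidate boxes flow through the three stages. The stage functions below are stand-in stubs, not the trained networks from the patent; they only show the filtering structure (propose, refine by confidence threshold, then emit box corners plus five landmarks):

```python
def p_net(image):
    """Stub: propose coarse face candidate boxes as (x1, y1, x2, y2, score)."""
    return [(10, 10, 60, 60, 0.9), (80, 5, 120, 45, 0.3)]

def r_net(image, boxes, threshold=0.5):
    """Stub: reject low-confidence candidates; a real R-Net also refines boxes."""
    return [b for b in boxes if b[4] >= threshold]

def o_net(image, boxes):
    """Stub: per surviving box, return corner coordinates plus five landmarks."""
    results = []
    for (x1, y1, x2, y2, score) in boxes:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        landmarks = [(cx - 10, cy - 10), (cx + 10, cy - 10),  # eye centres
                     (cx, cy),                                # nose tip
                     (cx - 8, cy + 12), (cx + 8, cy + 12)]    # mouth corners
        results.append(((x1, y1), (x2, y2), landmarks))
    return results

def detect_faces(image):
    """Run the full P-Net -> R-Net -> O-Net cascade."""
    return o_net(image, r_net(image, p_net(image)))
```

The design point of the cascade is that the cheap first stage runs on many windows while the expensive final stage runs on only the few survivors.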
4. The cross-platform face key point recognition and tracking method according to claim 1, characterized in that: in step 4, after the key points of the current frame face image are recognized, a layered Kalman filtering algorithm is applied to perform robustness strengthening on the face key points, and then an SVM (support vector machine) method is used to determine whether the key points of the current frame face image are in a successfully tracked state.
5. The cross-platform face key point recognition and tracking method according to claim 1, characterized in that: in step 5, when 68 tracked face key points have been accumulated, the 68 key points are associated by triangulation, and three angle arms formed by specific key points are extended to form a focus; the position of the focus is the estimated key point at a facial-feature or face-periphery location; the face key points are extended by this calculation until there are 108 of them.
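The claim-5 extension from 68 to 108 points can be sketched as: choose triangles of existing key points and emit one estimated "focus" point per triangle. The patent does not spell out how the focus is computed from the three angle arms; the centroid used below is an illustrative assumption standing in for that construction:

```python
def triangle_focus(p1, p2, p3):
    """Estimated extra key point for one triangle (centroid as a stand-in focus)."""
    return ((p1[0] + p2[0] + p3[0]) / 3.0,
            (p1[1] + p2[1] + p3[1]) / 3.0)

def extend_keypoints(points, triangles):
    """Append one estimated point per (i, j, k) index triple into `points`."""
    extra = [triangle_focus(points[i], points[j], points[k])
             for (i, j, k) in triangles]
    return points + extra
```

With 40 triangle triples chosen over the 68 base points, this yields the claimed 108-point set.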
6. The cross-platform face key point recognition and tracking method according to claim 4, characterized in that: the specific method of performing robustness strengthening on the face key points with the layered Kalman filtering algorithm is as follows: the face key points are divided into face outer contour points 0-16 and face inner contour points 17-67; for the face outer contour, two memory spaces of m and n times the face shape size respectively are used to store the face shape coordinates of the most recent m and n successfully tracked frames, and a start flag bit is set, where 1 ≤ n &lt; m ≤ 100; the stored valid n-frame and m-frame face shape coordinate information and a Kalman filter are used to filter the currently available shape coordinates, and the filtered face shape coordinates are output as the true coordinates of the current frame.
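A minimal sketch of the claim-6 smoothing step: a scalar Kalman filter applied per key point coordinate, alongside rolling buffers of recent successfully tracked frames. The noise parameters (`q`, `r`), the constant-position motion model, and the buffer lengths are illustrative assumptions; the claim's layered outer/inner contour split is represented only as two separate buffers:

```python
from collections import deque

class ScalarKalman:
    """1-D Kalman filter with a constant-position model, one per coordinate."""

    def __init__(self, q=1e-3, r=1e-1):
        self.x = None          # state estimate
        self.p = 1.0           # estimate variance
        self.q, self.r = q, r  # process and measurement noise (assumed values)

    def update(self, z):
        if self.x is None:     # first measurement initialises the state
            self.x = z
            return self.x
        self.p += self.q                  # predict: variance grows
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct toward measurement z
        self.p *= (1 - k)
        return self.x

# Separate rolling buffers for outer-contour (points 0-16) and inner-contour
# (points 17-67) shapes, standing in for the claim's m- and n-frame stores.
outer_history = deque(maxlen=100)   # m <= 100 per the claim
inner_history = deque(maxlen=10)    # n < m; 10 is an assumed value
```

Each incoming shape is filtered coordinate-by-coordinate, and the filtered shape is both emitted as the frame's "true" coordinates and appended to the history buffers.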
7. The cross-platform face key point recognition and tracking method according to claim 1, characterized in that: in step 2, before the multi-task convolutional neural network model is trained, a linear SVM criterion for whether the face key points are aligned with the face is added and stored in the model as a file; when face key point tracking is performed, it is used to determine whether the current frame is tracked successfully.
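At inference time, the claim-7 check reduces to evaluating a stored linear decision function over features of the current frame's key points. The weights, bias and feature construction below are invented placeholders; a real deployment would load the trained weight file the claim refers to:

```python
def linear_svm_decision(features, weights, bias):
    """Signed distance to the separating hyperplane: >= 0 means 'aligned'."""
    return sum(w * f for w, f in zip(weights, features)) + bias

def tracking_succeeded(features, weights, bias):
    """Claim-7 style success test for the current frame's key points."""
    return linear_svm_decision(features, weights, bias) >= 0.0
```

Because the decision is a single dot product, this check is cheap enough to run on every frame across platforms.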
8. A cross-platform face key point recognition and tracking system, characterized by comprising: a face detection and key point recognition model training module, a face image processing module and a face information computation module, wherein the face detection and key point recognition model training module is connected with the face image processing module, and the face image processing module is connected with the face information computation module;
the face detection and key point recognition model training module labels key points based on the collected face images and trains with a multi-task convolutional neural network algorithm to obtain a multi-task convolutional neural network model;
the face image processing module comprises an image restoration and preprocessing unit, and a face region and key point recognition unit and a face region and key point tracking unit sequentially connected with the image restoration and preprocessing unit;
the image restoration and preprocessing unit proportionally scales the face image to be detected, and acquires the face image brightness, exposure, sharpness and contrast and adjusts them to optimal values;
the face region and key point recognition unit reads the current frame of the face image to be detected that is input to the multi-task convolutional neural network model, and preliminarily obtains the face region and the corresponding face key point position information;
the face region and key point tracking unit, taking the key point information of the previous frame as the input for the current frame, determines whether the current frame face key points are in a successfully tracked state, until face key points of the expected number have been accumulated by tracking;
the face information computation module computes the Euler angles of the face through a pre-trained face deflection angle computation model to complete the face pose estimation.
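The claim-8 architecture can be wired up as three cooperating components. All class and method names below are invented for illustration, and the bodies are stubs standing in for the trained models; only the connection topology follows the claim:

```python
class TrainingModule:
    """Face detection and key point recognition model training module."""
    def train(self, labelled_faces):
        return "mtcnn-model"            # stands in for the trained network

class FaceImageProcessingModule:
    """Preprocessing, recognition and tracking units, sharing one model."""
    def __init__(self, model):
        self.model = model
    def process_frame(self, frame):
        # Stub result: a face region plus its key point list.
        return {"region": (0, 0, 100, 100), "keypoints": [(50, 50)]}

class FaceInfoModule:
    """Face information computation module: key points -> Euler angles."""
    def euler_angles(self, keypoints):
        return (0.0, 0.0, 0.0)          # (yaw, pitch, roll) placeholder

# Wiring per claim 8: training feeds processing, processing feeds info.
model = TrainingModule().train([])
processor = FaceImageProcessingModule(model)
info = FaceInfoModule()
```

The training module runs offline; only the processing and info modules ship in the cross-platform runtime, which is what keeps the per-frame path light.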
9. The cross-platform face key point recognition and tracking system according to claim 8, characterized in that: the face image processing module further comprises a face key point robustness strengthening unit, which is connected with the face region and key point tracking unit and performs robustness strengthening on the current frame face image key points using a layered Kalman filtering algorithm.
10. The cross-platform face key point recognition and tracking system according to claim 8, characterized in that: the face image processing module comprises a face key point computation unit connected with the face key point robustness strengthening unit, which associates the key points by triangulation and extends three angle arms formed by specific key points to form a focus; the position of the focus is the estimated key point at a facial-feature or face-periphery location, thereby extending the face key points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910688610.8A CN110399844A (en) | 2019-07-29 | 2019-07-29 | A cross-platform face key point recognition and tracking method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910688610.8A CN110399844A (en) | 2019-07-29 | 2019-07-29 | A cross-platform face key point recognition and tracking method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110399844A true CN110399844A (en) | 2019-11-01 |
Family
ID=68326443
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910688610.8A Pending CN110399844A (en) | 2019-07-29 | 2019-07-29 | A cross-platform face key point recognition and tracking method and system
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110399844A (en) |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110826487A (en) * | 2019-11-06 | 2020-02-21 | 中山职业技术学院 | Facial expression data acquisition method |
CN110852321A (en) * | 2019-11-11 | 2020-02-28 | 北京百度网讯科技有限公司 | Candidate frame filtering method and device and electronic equipment |
CN110866500A (en) * | 2019-11-19 | 2020-03-06 | 上海眼控科技股份有限公司 | Face detection alignment system, method, device, platform, mobile terminal and storage medium |
CN110879995A (en) * | 2019-12-02 | 2020-03-13 | 上海秒针网络科技有限公司 | Target object detection method and device, storage medium and electronic device |
CN111079659A (en) * | 2019-12-19 | 2020-04-28 | 武汉水象电子科技有限公司 | Face feature point positioning method |
CN111079625A (en) * | 2019-12-11 | 2020-04-28 | 江苏国光信息产业股份有限公司 | Control method for camera to automatically rotate along with human face |
CN111209873A (en) * | 2020-01-09 | 2020-05-29 | 杭州趣维科技有限公司 | High-precision face key point positioning method and system based on deep learning |
CN111311634A (en) * | 2020-01-23 | 2020-06-19 | 支付宝实验室(新加坡)有限公司 | Face image detection method, device and equipment |
CN111523524A (en) * | 2020-07-02 | 2020-08-11 | 江苏原力数字科技股份有限公司 | Facial animation capturing and correcting method based on machine learning and image processing |
CN111538344A (en) * | 2020-05-14 | 2020-08-14 | 重庆科技学院 | Intelligent wheelchair based on face key point motion following and control method thereof |
CN111666866A (en) * | 2020-06-02 | 2020-09-15 | 中电福富信息科技有限公司 | Cross-platform off-line multi-thread face recognition method based on OpenCV |
CN111882408A (en) * | 2020-09-27 | 2020-11-03 | 北京达佳互联信息技术有限公司 | Virtual trial method and device, electronic equipment and storage equipment |
CN111954055A (en) * | 2020-07-01 | 2020-11-17 | 北京达佳互联信息技术有限公司 | Video special effect display method and device, electronic equipment and storage medium |
CN112037010A (en) * | 2020-08-12 | 2020-12-04 | 无锡锡商银行股份有限公司 | Application method and device of multi-scene risk rating model based on SSR-Net in personal loan and storage medium |
CN112329602A (en) * | 2020-11-02 | 2021-02-05 | 平安科技(深圳)有限公司 | Method and device for acquiring face annotation image, electronic equipment and storage medium |
CN112488064A (en) * | 2020-12-18 | 2021-03-12 | 平安科技(深圳)有限公司 | Face tracking method, system, terminal and storage medium |
CN112818938A (en) * | 2021-03-03 | 2021-05-18 | 长春理工大学 | Face recognition algorithm and face recognition device adaptive to illumination interference environment |
CN113255608A (en) * | 2021-07-01 | 2021-08-13 | 杭州智爱时刻科技有限公司 | Multi-camera face recognition positioning method based on CNN classification |
CN113723437A (en) * | 2021-04-02 | 2021-11-30 | 荣耀终端有限公司 | Automatic training method of AI model and AI model training system |
CN114663835A (en) * | 2022-03-21 | 2022-06-24 | 合肥工业大学 | Pedestrian tracking method, system, equipment and storage medium |
CN115909508A (en) * | 2023-01-06 | 2023-04-04 | 浙江大学计算机创新技术研究院 | Image key point enhancement detection method under single-person sports scene |
CN117473116A (en) * | 2023-10-09 | 2024-01-30 | 深圳市金大智能创新科技有限公司 | Control method of active reminding function based on virtual person |
2019-07-29: CN CN201910688610.8A patent/CN110399844A/en, status: active (Pending)
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110826487A (en) * | 2019-11-06 | 2020-02-21 | 中山职业技术学院 | Facial expression data acquisition method |
CN110826487B (en) * | 2019-11-06 | 2023-11-03 | 中山职业技术学院 | Facial expression data acquisition method |
CN110852321B (en) * | 2019-11-11 | 2022-11-22 | 北京百度网讯科技有限公司 | Candidate frame filtering method and device and electronic equipment |
CN110852321A (en) * | 2019-11-11 | 2020-02-28 | 北京百度网讯科技有限公司 | Candidate frame filtering method and device and electronic equipment |
CN110866500A (en) * | 2019-11-19 | 2020-03-06 | 上海眼控科技股份有限公司 | Face detection alignment system, method, device, platform, mobile terminal and storage medium |
CN110879995A (en) * | 2019-12-02 | 2020-03-13 | 上海秒针网络科技有限公司 | Target object detection method and device, storage medium and electronic device |
CN111079625A (en) * | 2019-12-11 | 2020-04-28 | 江苏国光信息产业股份有限公司 | Control method for camera to automatically rotate along with human face |
CN111079625B (en) * | 2019-12-11 | 2023-10-27 | 江苏国光信息产业股份有限公司 | Control method for automatically following rotation of camera along with face |
CN111079659A (en) * | 2019-12-19 | 2020-04-28 | 武汉水象电子科技有限公司 | Face feature point positioning method |
CN111209873A (en) * | 2020-01-09 | 2020-05-29 | 杭州趣维科技有限公司 | High-precision face key point positioning method and system based on deep learning |
CN111311634A (en) * | 2020-01-23 | 2020-06-19 | 支付宝实验室(新加坡)有限公司 | Face image detection method, device and equipment |
CN111311634B (en) * | 2020-01-23 | 2024-02-27 | 支付宝实验室(新加坡)有限公司 | Face image detection method, device and equipment |
CN111538344A (en) * | 2020-05-14 | 2020-08-14 | 重庆科技学院 | Intelligent wheelchair based on face key point motion following and control method thereof |
CN111666866A (en) * | 2020-06-02 | 2020-09-15 | 中电福富信息科技有限公司 | Cross-platform off-line multi-thread face recognition method based on OpenCV |
CN111954055B (en) * | 2020-07-01 | 2022-09-02 | 北京达佳互联信息技术有限公司 | Video special effect display method and device, electronic equipment and storage medium |
CN111954055A (en) * | 2020-07-01 | 2020-11-17 | 北京达佳互联信息技术有限公司 | Video special effect display method and device, electronic equipment and storage medium |
CN111523524A (en) * | 2020-07-02 | 2020-08-11 | 江苏原力数字科技股份有限公司 | Facial animation capturing and correcting method based on machine learning and image processing |
CN112037010A (en) * | 2020-08-12 | 2020-12-04 | 无锡锡商银行股份有限公司 | Application method and device of multi-scene risk rating model based on SSR-Net in personal loan and storage medium |
CN111882408A (en) * | 2020-09-27 | 2020-11-03 | 北京达佳互联信息技术有限公司 | Virtual trial method and device, electronic equipment and storage equipment |
CN112329602A (en) * | 2020-11-02 | 2021-02-05 | 平安科技(深圳)有限公司 | Method and device for acquiring face annotation image, electronic equipment and storage medium |
CN112488064A (en) * | 2020-12-18 | 2021-03-12 | 平安科技(深圳)有限公司 | Face tracking method, system, terminal and storage medium |
CN112488064B (en) * | 2020-12-18 | 2023-12-22 | 平安科技(深圳)有限公司 | Face tracking method, system, terminal and storage medium |
CN112818938A (en) * | 2021-03-03 | 2021-05-18 | 长春理工大学 | Face recognition algorithm and face recognition device adaptive to illumination interference environment |
CN113723437A (en) * | 2021-04-02 | 2021-11-30 | 荣耀终端有限公司 | Automatic training method of AI model and AI model training system |
CN113255608A (en) * | 2021-07-01 | 2021-08-13 | 杭州智爱时刻科技有限公司 | Multi-camera face recognition positioning method based on CNN classification |
CN114663835A (en) * | 2022-03-21 | 2022-06-24 | 合肥工业大学 | Pedestrian tracking method, system, equipment and storage medium |
CN115909508A (en) * | 2023-01-06 | 2023-04-04 | 浙江大学计算机创新技术研究院 | Image key point enhancement detection method under single-person sports scene |
CN117473116A (en) * | 2023-10-09 | 2024-01-30 | 深圳市金大智能创新科技有限公司 | Control method of active reminding function based on virtual person |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110399844A (en) | A cross-platform face key point recognition and tracking method and system | |
CN109829436B (en) | Multi-face tracking method based on depth appearance characteristics and self-adaptive aggregation network | |
CN110472554B (en) | Table tennis action recognition method and system based on attitude segmentation and key point features | |
CN109858368B (en) | Rosenbrock-PSO-based face recognition attack defense method | |
CN108090918A (en) | Real-time face tracking based on a deep fully convolutional Siamese network | |
CN110334635A (en) | Subject tracking method and device, electronic equipment and computer-readable storage medium | |
CN108960047B (en) | Face deduplication method in video surveillance based on a deep binary tree | |
CN111274916A (en) | Face recognition method and face recognition device | |
CN107862240B (en) | Multi-camera collaborative face tracking method | |
CN110223322A (en) | Image-recognizing method, device, computer equipment and storage medium | |
CN111353385B (en) | Pedestrian re-identification method and device based on mask alignment and attention mechanism | |
CN109448027A (en) | An adaptive, persistent motion estimation method based on algorithm fusion | |
CN108460340A (en) | A gait recognition method based on 3D dense convolutional neural networks | |
CN112541421B (en) | Pedestrian reloading and reloading recognition method for open space | |
CN111476077A (en) | Multi-view gait recognition method based on deep learning | |
CN110276831A (en) | Construction method and device of a three-dimensional model, equipment, and computer-readable storage medium | |
CN110175574A (en) | A road network extraction method and device | |
CN112150692A (en) | Access control method and system based on artificial intelligence | |
CN116740539A (en) | Visual SLAM method and system based on lightweight target detection network | |
CN109543629A (en) | A blink recognition method, device, equipment and readable storage medium | |
CN108364303A (en) | An intelligent camera tracking method with privacy protection | |
CN114519897B (en) | Face liveness detection method based on color space fusion and recurrent neural networks | |
CN113449663B (en) | Collaborative intelligent security method and device based on polymorphic fitting | |
CN109978779A (en) | A multi-target tracking device based on kernelized correlation filtering | |
CN109064497A (en) | A video tracking method based on color clustering and incremental learning | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||