CN107122705A - Face key point detection method based on a three-dimensional face model - Google Patents

Face key point detection method based on a three-dimensional face model Download PDF

Info

Publication number
CN107122705A
CN107122705A (application CN201710159215.1A, granted as CN107122705B)
Authority
CN
China
Prior art keywords
parameter
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710159215.1A
Other languages
Chinese (zh)
Other versions
CN107122705B (en
Inventor
Zhu Xiangyu (朱翔昱)
Lei Zhen (雷震)
Liu Hao (刘浩)
Li Ziqing (李子青)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201710159215.1A priority Critical patent/CN107122705B/en
Publication of CN107122705A publication Critical patent/CN107122705A/en
Application granted granted Critical
Publication of CN107122705B publication Critical patent/CN107122705B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a face key point detection method based on a three-dimensional face model, comprising the following steps. Step 01: obtain a face image and the initial parameters of the three-dimensional face model from a face training sample. Step 02: generate a pose-adaptive feature and a normalized coordinate code from the face image and the current parameters. Step 03: transform the pose-adaptive feature and the normalized coordinate code with a convolutional neural network, fuse them, and obtain the residual between the ground-truth parameters and the current parameters. Step 04: update the parameters with the regressed residual and return to Step 02 until the residual reaches a preset threshold. Step 05: update the three-dimensional face model with the final parameters and collect the face key points on the three-dimensional face model. The present invention thereby achieves face key point detection under full pose.

Description

Face key point detection method based on a three-dimensional face model
Technical field
The invention belongs to the field of image processing and pattern recognition, and in particular relates to a face key point detection method based on a three-dimensional face model.
Background technology
Face key points are a series of points on the face with fixed semantics, such as the eye corners, the nose tip and the mouth corners. In face-understanding computer vision, key point detection is an important pre-processing step: most face analysis systems first detect key points in order to obtain an accurate account of the facial layout, so that features can be extracted at specific facial positions. However, most current key point detection methods can only handle medium poses and below, i.e., faces whose yaw angle is under 45°; key point detection on large-pose faces (yaw up to 90°) has always been a difficult problem.
The challenges mainly lie in the following three aspects. First, traditional key point detection algorithms assume that every key point has a stable appearance that can be detected. Under large poses, however, some key points inevitably become invisible due to self-occlusion; because their appearance information is occluded, these invisible points cannot be detected, which causes conventional methods to fail. Second, facial appearance changes far more under large poses, from frontal all the way to profile, so a localization algorithm must be considerably more robust to understand facial appearance across poses. Finally, on the training-data side, annotating key points on large-pose faces is difficult, since the positions of invisible key points have to be guessed; the faces in most existing databases are within medium poses, and the few databases that contain large-pose faces annotate only the visible key points, which makes it hard to develop a key point algorithm that handles arbitrary poses.
One possible solution in the prior art is to fit a three-dimensional face model directly from the image, typically by transforming the input image with a cascade of convolutional neural networks that regress the parameters of the three-dimensional face model. This technique has the following defects. First, it represents the rotation of the face with Euler angles, which under large poses become ambiguous because of gimbal lock. Second, it uses only image-view input features, i.e., the original image is fed directly into the convolutional neural network, even though intermediate results could be used to progressively rectify the image within the cascade and thereby further improve fitting accuracy. Finally, when training the convolutional neural networks it does not effectively model the priority of the model parameters, so the fitting capacity of the network is dispersed over minor parameters.
The content of the invention
In order to solve the above problems in the prior art, the present invention proposes a face key point detection method based on a three-dimensional face model, so as to achieve face key point detection under full pose.
This method comprises the following steps:
Step 01: obtain a face image and the initial parameters of the three-dimensional face model from a face training sample;
Step 02: generate the pose-adaptive feature and the normalized coordinate code from the face image and the current parameters;
Step 03: transform the pose-adaptive feature and the normalized coordinate code with a convolutional neural network, fuse the transformed features, and obtain the residual between the ground-truth parameters and the current parameters;
Step 04: update the parameters with the parameter residual, and return to Step 02 until the parameter residual reaches a preset threshold;
Step 05: update the three-dimensional face model with the final parameters, and collect the face key points on the three-dimensional face model.
Preferably, when generating the pose-adaptive feature in Step 02, the three-dimensional face model is projected with the formula:
V(p) = f·Pr·R·(S̄ + A_id·α_id + A_exp·α_exp) + t_2d
where V(p) is the function that constructs the three-dimensional face model and projects it, giving the two-dimensional coordinate of every key point of the three-dimensional model on the image; S̄ denotes the mean face shape; A_id denotes the PCA principal axes extracted from three-dimensional faces with neutral expression; α_id denotes the shape parameter; A_exp denotes the PCA principal axes extracted from the differences between expressive and neutral faces; α_exp denotes the expression parameter; f is the scale factor; Pr is the orthographic projection matrix; R is the rotation matrix, built from the quaternion [q0, q1, q2, q3]; and t_2d is the translation vector. The fitting target parameters are [f, R, t_2d, α_id, α_exp], i.e., the fitting target parameter set is [f, q0, q1, q2, q3, t_2d, α_id, α_exp].
Preferably, the rotation matrix built from the quaternion [q0, q1, q2, q3] is:
R = [ q0²+q1²−q2²−q3²   2(q1q2+q0q3)      2(q1q3−q0q2)
      2(q1q2−q0q3)      q0²−q1²+q2²−q3²   2(q0q1+q2q3)
      2(q1q3+q0q2)      2(q2q3−q0q1)      q0²−q1²−q2²+q3² ]
Preferably, generating the pose-adaptive feature in Step 02 comprises:
computing the two-dimensional cylindrical coordinates of every vertex of the three-dimensional face model, and sampling n×n anchor points equally spaced along the azimuth axis and the height axis; during fitting, applying deformation, scaling, rotation and translation to these anchor points with the current model parameters to obtain the anchor positions on the image, thereby generating the pose-adaptive feature.
Preferably, generating the normalized coordinate code in Step 02 uses the formula:
PNCC(I, p) = I & ZBuffer(V_3d(p), NCC)
where PNCC is the projected normalized coordinate code; I is the input face image; p is the current parameter; & is the stacking operation along the channel dimension; ZBuffer is a function that renders the three-dimensional mesh with a texture to generate a two-dimensional image; and V_3d(p) is the three-dimensional face after scaling, rotation, translation and deformation. The stacked image is the projected normalized coordinate code.
Preferably, Step 03 specifically comprises:
transforming the pose-adaptive feature and the normalized coordinate code with two parallel branches of a convolutional neural network, fusing the transformed features with an additional fully connected layer, and regressing the parameter residual from the fused result.
Preferably, the parameter residual in Step 03 is computed as:
Δp_k = Net_k(PAF(p_k, I), PNCC(p_k, I))
where p_k is the current parameter; I is the input image; Δp_k is the residual between the current parameter and the ground-truth parameter; PAF is the pose-adaptive feature; PNCC is the projected normalized coordinate code; and Net_k is the two-branch parallel convolutional neural network.
Preferably, Step 03 also comprises training the convolutional neural network, during which the ground-truth residual is weighted with the loss:
E_owpdc = (Δp − (p_g − p_0))^T · diag(w*) · (Δp − (p_g − p_0))
where p_c = p_0 + Δp; 0 ≤ w ≤ 1 are the parameter weights; Δp is the output of the convolutional neural network; p_g is the ground-truth parameter; p_0 is the input parameter of the current iteration; p_c is the current parameter; V(p) is the deformation and weak perspective projection function; and diag(·) constructs a diagonal matrix.
Preferably, updating the parameters with the parameter residual in Step 04 specifically means adding the parameter residual to the current parameters.
Compared with the prior art, the present invention at least has the following advantage:
the face key point detection method based on a three-dimensional face model achieves face key point detection under full pose.
Brief description of the drawings
Fig. 1 is a flow diagram of the face key point detection method based on a three-dimensional face model provided by the present invention;
Fig. 2 is a flow diagram of the two-branch parallel convolutional neural network provided by the present invention.
Embodiment
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It will be apparent to those skilled in the art that these embodiments are only used to explain the technical principles of the present invention and are not intended to limit the scope of the invention.
The invention discloses a face key point detection method based on a three-dimensional face model, which, as shown in Fig. 1, comprises the following steps:
Step 00: build the three-dimensional morphable face model.
Three-dimensional face point-cloud samples are acquired with a three-dimensional scanner, and the three-dimensional morphable model is built with principal component analysis (PCA):
S = S̄ + A_id·α_id + A_exp·α_exp
where S denotes a three-dimensional face; S̄ denotes the mean face shape; A_id denotes the PCA principal axes extracted from three-dimensional faces with neutral expression; α_id denotes the shape parameter; A_exp denotes the PCA principal axes extracted from the differences between expressive and neutral faces; and α_exp denotes the expression parameter.
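A minimal numpy sketch of the morphable-model construction described above, under stated assumptions: flattened, vertex-aligned scans are given as row vectors, identity axes come from PCA on neutral scans, and expression axes from PCA on (expression − neutral) differences. All function and variable names are illustrative, not from the patent.

```python
import numpy as np

def pca_axes(samples, n_components):
    """PCA of row-vector samples (m, d): returns the mean and the
    top principal axes as a (d, n_components) matrix, via SVD."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components].T

def build_model(neutral_scans, expression_diffs, n_id=2, n_exp=1):
    """Identity axes A_id from neutral scans; expression axes A_exp
    from the differences between expressive and neutral faces."""
    s_bar, a_id = pca_axes(neutral_scans, n_id)
    _, a_exp = pca_axes(expression_diffs, n_exp)
    return s_bar, a_id, a_exp

def synthesize(s_bar, a_id, a_exp, alpha_id, alpha_exp):
    """S = S_bar + A_id * alpha_id + A_exp * alpha_exp."""
    return s_bar + a_id @ alpha_id + a_exp @ alpha_exp
```

With zero shape and expression coefficients, `synthesize` returns the mean face, as the model equation requires.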
After the three-dimensional face model is constructed, it is projected onto the image plane with weak perspective projection:
V(p) = f·Pr·R·(S̄ + A_id·α_id + A_exp·α_exp) + t_2d
where V(p) is the function that constructs the face model and projects it, giving the two-dimensional coordinate of every model point on the image; f is the scale factor; Pr is the orthographic projection matrix; R is the rotation matrix; and t_2d is the translation vector. The target parameters to fit are thus [f, R, t_2d, α_id, α_exp].
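The weak perspective projection above can be sketched in a few lines of numpy. Here the orthographic matrix Pr simply keeps the x and y rows; vertex layout as a (3, N) array is an assumption for illustration.

```python
import numpy as np

# Pr keeps only x and y: orthographic projection onto the image plane.
PR = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])

def weak_perspective_project(shape3d, f, rotation, t2d):
    """V(p) = f * Pr * R * S + t_2d for vertices stored as (3, N)."""
    return f * (PR @ (rotation @ shape3d)) + np.asarray(t2d).reshape(2, 1)
```

With an identity rotation, unit scale and zero translation, the projection just drops the z coordinate.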
Traditionally, face pose is represented with Euler angles: pitch, yaw and roll. However, when the yaw angle approaches 90°, i.e., the pose approaches profile, Euler angles suffer from gimbal lock and become ambiguous: two different sets of Euler angles can correspond to the same rotation matrix. We therefore represent the rotation matrix with a quaternion [q0, q1, q2, q3] and absorb the scale factor f into this matrix, giving the model parameter set:
[f, q0, q1, q2, q3, t_2d, α_id, α_exp].
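A sketch of the quaternion-to-rotation conversion, matching the matrix given in claim 3 term by term. For a unit quaternion this yields a proper rotation; scaling the quaternion absorbs the factor f, as the text notes.

```python
import numpy as np

def quaternion_to_rotation(q0, q1, q2, q3):
    """Rotation matrix from the quaternion [q0, q1, q2, q3]
    (q0 is the scalar part), exactly as written in claim 3."""
    return np.array([
        [q0*q0+q1*q1-q2*q2-q3*q3, 2*(q1*q2+q0*q3),         2*(q1*q3-q0*q2)],
        [2*(q1*q2-q0*q3),         q0*q0-q1*q1+q2*q2-q3*q3, 2*(q0*q1+q2*q3)],
        [2*(q1*q3+q0*q2),         2*(q2*q3-q0*q1),         q0*q0-q1*q1-q2*q2+q3*q3],
    ])
```

The identity quaternion [1, 0, 0, 0] produces the identity matrix, and any unit quaternion produces an orthogonal matrix with determinant 1, which is why this parameterization avoids the gimbal-lock ambiguity of Euler angles.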
The three-dimensional morphable face model serves as the fitting target. Training samples are built from faces with hand-annotated key points (or from public face key point datasets), and on this basis the face profiling technique rotates faces out of plane, generating a richer face training set with larger yaw angles.
Step 01: obtain the face image and the initial parameters.
Step 02: generate the pose-adaptive feature and the normalized coordinate code.
The three-dimensional face model fitting algorithm based on convolutional neural networks is described below, i.e., how a convolutional neural network estimates the pose, shape and expression parameters of the face. For the network input we design two features: the pose-adaptive feature and the projected normalized coordinate code.
First, the pose-adaptive feature (PAF) is described.
In a convolutional neural network, a traditional convolutional layer performs convolution pixel by pixel along the image axes, whereas in PAF the convolution is performed at fixed semantic positions on the face. These positions are obtained as follows. Since a face can be roughly approximated by a cylinder, we compute the two-dimensional cylindrical coordinates of every vertex of the three-dimensional face model and sample n×n anchor points equally spaced along the azimuth axis and the height axis. During fitting, given the current model parameter p, we project the three-dimensional face model and obtain the anchor positions on the two-dimensional image, which serve as the positions where convolution is performed. Note that the convolution responses at the anchors form an n×n map, on which traditional convolutions can subsequently be applied. To reduce the influence of features in occluded regions, we divide the responses at occluded regions by 2, producing the pose-adaptive feature.
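The anchor sampling above can be sketched as follows, assuming vertices as a (3, N) array with y as the vertical axis; mapping each grid cell to its nearest vertex is an illustrative simplification, and all names are hypothetical.

```python
import numpy as np

def sample_anchors(vertices, n=8):
    """Map mesh vertices (3, N) to 2-D cylindrical coordinates
    (azimuth, height) and, for each node of an n-by-n grid spanning
    those coordinates, pick the nearest vertex index as an anchor."""
    x, y, z = vertices
    azimuth = np.arctan2(x, z)   # angle around the vertical axis
    height = y
    az_grid = np.linspace(azimuth.min(), azimuth.max(), n)
    h_grid = np.linspace(height.min(), height.max(), n)
    anchors = np.empty((n, n), dtype=int)
    for i, a in enumerate(az_grid):
        for j, h in enumerate(h_grid):
            d = (azimuth - a) ** 2 + (height - h) ** 2
            anchors[i, j] = int(np.argmin(d))   # nearest vertex
    return anchors
```

Projecting the vertices at these n×n anchor indices with the current parameters gives the image positions where the PAF convolution is evaluated.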
Next, the projected normalized coordinate code (PNCC) is described. This input feature relies on a new coordinate code: the three-dimensional mean face is first normalized to 0–1 along each of its three dimensions:
NCC_d = (S̄_d − min S̄_d) / (max S̄_d − min S̄_d),  d ∈ {x, y, z}
After normalization, every point of the three-dimensional model is uniquely distributed between [0, 0, 0] and [1, 1, 1], so the code can be regarded as a three-dimensional coordinate code, which we call the normalized coordinate code (NCC). Unlike the usual vertex numbering (e.g., 0, 1, …, n), the normalized coordinate code is continuous in three-dimensional space. During fitting, given the current model parameter p, we render the projected three-dimensional face with the normalized coordinate code using the Z-buffer algorithm:
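A one-function sketch of the min-max normalization described above, assuming the mean face is stored as a (3, N) array; the name is illustrative.

```python
import numpy as np

def normalized_coordinate_code(mean_shape):
    """Min-max normalize each axis of the mean face (3, N) to [0, 1];
    the per-vertex triple is the NCC, usable directly as an RGB texture."""
    mn = mean_shape.min(axis=1, keepdims=True)
    mx = mean_shape.max(axis=1, keepdims=True)
    return (mean_shape - mn) / (mx - mn)
```

Because the code is computed once on the mean face, every vertex keeps the same NCC value across all poses and expressions, which is what makes the rendered code a stable per-vertex identifier.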
PNCC(I, p) = I & ZBuffer(V_3d(p), NCC)
where PNCC is the projected normalized coordinate code; I is the input face image; p is the current parameter; & is the stacking operation along the channel dimension; ZBuffer is a function that renders the three-dimensional mesh with a texture to generate a two-dimensional image; and V_3d(p) is the three-dimensional face after scaling, rotation, translation and deformation. The stacked image is the projected normalized coordinate code, which is fed into the convolutional neural network.
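A toy stand-in for the ZBuffer rendering and the channel stacking above. A real implementation rasterizes the mesh triangles; this sketch merely splats per-vertex NCC colours with a depth test, keeping the vertex closest to the camera (larger z here, by assumption) at each pixel. All names are illustrative.

```python
import numpy as np

def zbuffer_splat(verts2d, depth, colors, hw=(32, 32)):
    """Splat per-vertex colours (3, N) into an (h, w, 3) image,
    keeping only the largest-depth vertex at each pixel."""
    h, w = hw
    img = np.zeros((h, w, 3))
    zbuf = np.full((h, w), -np.inf)
    xs = np.clip(verts2d[0].astype(int), 0, w - 1)
    ys = np.clip(verts2d[1].astype(int), 0, h - 1)
    for x, y, z, c in zip(xs, ys, depth, colors.T):
        if z > zbuf[y, x]:       # depth test: keep the nearest vertex
            zbuf[y, x] = z
            img[y, x] = c
    return img

def pncc(image, rendered_ncc):
    """PNCC(I, p) = I & ZBuffer(V3d(p), NCC): stack along channels."""
    return np.concatenate([image, rendered_ncc], axis=2)
```

Stacking a 3-channel image with the 3-channel rendered code yields the 6-channel input that the network branch consumes.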
The two features are complementary. The projected normalized coordinate code is an image-view feature: the original image is fed directly into the convolutional neural network. The pose-adaptive feature is a model-view feature: intermediate fitting results are used to rectify the original image. Because the projected normalized coordinate code covers the whole face image, its image context is richer; it suits face localization and coarse fitting and matters more in the first few iterations. The pose-adaptive feature, by performing convolution at the anchors, uses the current model parameters to localize and rectify the face in the image, progressively simplifying the fitting task; it suits fine fitting of details and matters more in the last few iterations.
Step 03: transform and fuse the features to obtain the parameter residual.
As shown above, the two features are complementary. To exploit the advantages of both, we run K iterations of a two-branch parallel convolutional neural network. In the k-th iteration, given an initial parameter p_k, we generate the pose-adaptive feature and the projected normalized coordinate code from p_k, and train the two-branch parallel convolutional network shown in Fig. 2: the pose-adaptive-feature branch contains 5 convolutional layers, 4 pooling layers and one fully connected layer, and the projected-normalized-coordinate-code branch contains one pose-adaptive convolutional layer, three ordinary convolutional layers, three pooling layers and one fully connected layer. The network transforms the two features with its parallel branches and fuses them with one fully connected layer. The fused feature finally regresses the residual between the current parameter and the target parameter:
Δp_k = Net_k(PAF(p_k, I), PNCC(p_k, I))
where p_k is the current parameter; I is the input image; Δp_k is the residual between the current parameter and the ground-truth parameter; PAF is the pose-adaptive feature; PNCC is the projected normalized coordinate code; and Net_k is the two-branch parallel convolutional neural network.
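The two-branch-then-fuse structure can be sketched with plain fully connected layers standing in for the convolutional branches. This is only a shape-level illustration of Net_k, not the patent's architecture; all names and dimensions are assumptions.

```python
import numpy as np

def fc_relu(x, w, b):
    """Fully connected layer with ReLU activation."""
    return np.maximum(x @ w + b, 0.0)

def two_stream_regress(paf_feat, pncc_feat, params):
    """Transform each input feature in its own branch, concatenate
    the branch outputs, and regress the parameter residual with a
    final fully connected layer (linear output)."""
    h_paf = fc_relu(paf_feat, params["w_paf"], params["b_paf"])
    h_pncc = fc_relu(pncc_feat, params["w_pncc"], params["b_pncc"])
    fused = np.concatenate([h_paf, h_pncc])      # fusion layer input
    return fused @ params["w_out"] + params["b_out"]   # Δp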
How to train the convolutional network is described below. The basic idea is to regress a parameter residual close to the ground-truth parameter residual. Because the importance of the face model parameters differs — a few parameters (such as pose) matter far more than the vast majority — the loss on each parameter must be weighted during training. In traditional algorithms the weights are mutually independent, usually hand-specified or determined by "the loss produced by mis-estimating a given parameter". The weights, however, are correlated: for example, before the pose parameters are accurate enough, estimating the expression parameters is meaningless. The present invention obtains the weights of all parameters jointly by optimizing one energy function, the Optimized Weighted Parameter Distance Cost (OWPDC):
E_owpdc = (Δp − (p_g − p_0))^T · diag(w*) · (Δp − (p_g − p_0))
where p_c = p_0 + Δp; 0 ≤ w ≤ 1 are the parameter weights; Δp is the output of the convolutional neural network; p_g is the ground-truth parameter; p_0 is the input parameter of the current iteration; p_c is the current parameter; V(p) is the deformation and weak perspective projection function; and diag(·) constructs a diagonal matrix.
As the formula shows, adding the weighted ground-truth residual diag(w)·(p_g − p_c) to the current parameter p_c should bring the three-dimensional face constructed from the updated parameters closer to the real face V(p_g). At the same time, because the fitting capacity of the neural network is limited, the term λ·||diag(w)·(p_g − p_c)||² models the pressure that fitting the current parameters puts on the network; adding it to the loss encourages the network to assign weight to the most cost-effective parameters.
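Evaluating the OWPDC objective for a given weight vector is a one-liner; a minimal sketch follows, with the symbols matching the formula above (Δp, p_g, p_0, w).

```python
import numpy as np

def owpdc_loss(delta_p, p_g, p_0, w):
    """E_owpdc = (Δp − (p_g − p_0))^T diag(w) (Δp − (p_g − p_0)),
    with 0 <= w <= 1 the per-parameter weights."""
    r = delta_p - (p_g - p_0)
    return float(r @ (w * r))
```

A perfect prediction (Δp equal to the true residual p_g − p_0) gives zero loss, and a zero weight on a parameter removes that parameter's error from the cost entirely, which is how the weights encode parameter priority.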
During training, solving for the optimal w per sample is too expensive, so V(p_c + diag(w)·(p_g − p_c)) is approximated by a Taylor expansion at p_g, giving:
||V′(p_g)·diag(w − 1)·Δp_c||² + λ·||diag(w)·Δp_c||²
where V′(p_g) is the Jacobian of V at p_g. Expanding the above and dropping the constant term gives:
w^T (diag(Δp_c)·V′(p_g)^T·V′(p_g)·diag(Δp_c)) w − 2·1^T (diag(Δp_c)·V′(p_g)^T·V′(p_g)·diag(Δp_c)) w
+ λ·w^T diag(Δp_c .* Δp_c) w
Let H = V′(p_g)·diag(Δp_c). The original optimization problem can then be written as a standard quadratic program in w subject to
0 ≤ w ≤ 1
which can be solved quickly with an interior-point method. However, computing H in this loss is very expensive, and recomputing it for every training sample would make the training time unacceptable. Our experiments show that the only expensive factor of H is V′(p_g), which is fixed for each training sample. Therefore, V′(p_g) can be computed and stored for every sample before training, and simply read during training. The resulting weights — i.e., the weighting of each parameter's loss in OWPDC — describe the priority of each parameter.
Step 04: update the parameters with the parameter residual.
The input parameters are added to the parameter residual to obtain a better parameter p_{k+1} = p_k + Δp_k, and the next iteration, comprising input feature construction and network parameter estimation, is carried out. After K iterations, once the parameter residual reaches the preset threshold, V(p_k) gives the position of every point of the three-dimensional face on the image.
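The cascade update p_{k+1} = p_k + Δp_k can be sketched as a short loop. The per-iteration networks and the feature extractor are stand-ins for Net_k and the PAF/PNCC construction; the stopping rule on the residual norm is an illustrative reading of "until the parameter residual reaches the preset threshold".

```python
import numpy as np

def fit(p0, image, networks, extract_features, threshold=1e-6, max_iters=10):
    """Cascade fitting: repeatedly regress a residual and add it to the
    current parameters, stopping when the residual is small enough or
    the per-iteration networks are exhausted."""
    p = np.asarray(p0, dtype=float)
    for net in networks[:max_iters]:
        paf, pncc_feat = extract_features(p, image)
        dp = net(paf, pncc_feat)      # Δp_k = Net_k(PAF, PNCC)
        p = p + dp                    # p_{k+1} = p_k + Δp_k
        if np.linalg.norm(dp) < threshold:
            break
    return p
```

Because the features are rebuilt from the updated parameters each round, every iteration sees a progressively better-rectified input, which is the point of the cascade.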
Step 05: collect the face key points on the three-dimensional face model.
Because existing face key point training samples are usually within medium poses, the present invention rotates the existing training samples out of plane to generate training samples under large poses, as follows.
Given a training sample, comprising a face image and hand-annotated key points, fitting the three-dimensional face model to the key points yields the three-dimensional model of the face in the image. Anchor points are then sampled uniformly in the background region, and the depth of each anchor is estimated from its nearest neighbours on the three-dimensional face model. Once all anchor depths are obtained, the anchors are triangulated into a set of triangles, which together with the fitted three-dimensional face form the depth information of the image. This "virtual depth image" can be rotated out of plane in three-dimensional space and rendered at any angle, generating the appearance of the face under different poses. The present invention enlarges the yaw angle in 5° steps, progressively generating a series of virtual samples up to 90°.
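The 5°-step yaw schedule of the face profiling above can be sketched as follows; rendering each rotated mesh into a virtual image is omitted, and the function names are illustrative.

```python
import numpy as np

def yaw_matrix(deg):
    """Rotation about the vertical (y) axis by `deg` degrees."""
    r = np.deg2rad(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def profile_views(mesh, start_deg, step=5, stop=90):
    """Enlarge the yaw angle in `step`-degree increments up to `stop`,
    rotating the (3, N) mesh out of plane for each virtual sample."""
    return [(deg, yaw_matrix(deg) @ mesh)
            for deg in range(start_deg + step, stop + 1, step)]
```

Starting from a fitted face at, say, 45° yaw, this yields the series of progressively more profiled meshes (50°, 55°, …, 90°) that are rendered into virtual large-pose training samples.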
The present invention overcomes the inability of traditional key point detection algorithms to locate self-occluded key points by fitting the three-dimensional face model directly from the image and sampling the key points from the fitted three-dimensional face. During fitting, besides the image-view projected normalized coordinate code, a model-view feature — the pose-adaptive feature — is designed; it uses intermediate fitting results to implicitly frontalize the image, progressively simplifying the fitting task and further improving fitting accuracy. Because the image-view and model-view features are complementary, a two-branch parallel convolutional neural network transforms and fuses the two input features simultaneously, and the fused feature performs the model parameter regression. When training the convolutional network, the present invention further improves fitting accuracy by taking the priority of the face model parameters into account so that the network concentrates on fitting the important parameters. The invention thereby achieves face key point detection under full pose.
The technical solutions of the present invention have thus been described with reference to the preferred embodiments shown in the drawings. Those skilled in the art will, however, readily understand that the scope of protection of the present invention is obviously not limited to these embodiments. Without departing from the principles of the present invention, those skilled in the art may make equivalent changes or substitutions to the related technical features, and the technical solutions after such changes or substitutions will still fall within the scope of protection of the present invention.

Claims (9)

1. A face key point detection method based on a three-dimensional face model, characterized by comprising the following steps:
Step 01, obtaining a face image and the initial parameters of the three-dimensional face model from a face training sample;
Step 02, generating a pose-adaptive feature and a normalized coordinate code from the face image and the current parameters;
Step 03, transforming the pose-adaptive feature and the normalized coordinate code with a convolutional neural network and fusing them, to obtain the residual between the ground-truth parameters and the current parameters;
Step 04, updating the parameters with the parameter residual, and returning to Step 02 until the parameter residual reaches a preset threshold;
Step 05, updating the three-dimensional face model with the final parameters, and collecting the face key points on the three-dimensional face model.
2. The face key point detection method based on a three-dimensional face model according to claim 1, characterized in that, when generating the pose-adaptive feature in Step 02, the three-dimensional face model is projected with the formula:
V(p) = f·Pr·R·(S̄ + A_id·α_id + A_exp·α_exp) + t_2d
where V(p) is the function that constructs the three-dimensional face model and projects it, giving the two-dimensional coordinate of every key point of the three-dimensional model on the image; S̄ denotes the mean face shape; A_id denotes the PCA principal axes extracted from three-dimensional faces with neutral expression; α_id denotes the shape parameter; A_exp denotes the PCA principal axes extracted from the differences between expressive and neutral faces; α_exp denotes the expression parameter; f is the scale factor; Pr is the orthographic projection matrix; R is the rotation matrix, built from the quaternion [q0, q1, q2, q3]; and t_2d is the translation vector. The fitting target parameters are [f, R, t_2d, α_id, α_exp], i.e., the fitting target parameter set is [f, q0, q1, q2, q3, t_2d, α_id, α_exp].
3. the face critical point detection method based on three-dimensional face model according to claim 1, it is characterised in that it is described by Four-tuple [q0,q1,q2,q3] build spin matrix formula be:
$$R=\begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2+q_0q_3) & 2(q_1q_3-q_0q_2)\\ 2(q_1q_2-q_0q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_0q_1+q_2q_3)\\ 2(q_1q_3+q_0q_2) & 2(q_2q_3-q_0q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$$
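As an illustrative aside (not part of the claims), the quaternion-to-rotation-matrix formula of claim 3 can be sketched numerically; the function name below is an assumption for the example:

```python
import numpy as np

def quat_to_rotmat(q):
    """Build the 3x3 rotation matrix of claim 3 from a quaternion
    [q0, q1, q2, q3], with q0 as the scalar part."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 + q0*q3),             2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3),             q0**2 - q1**2 + q2**2 - q3**2, 2*(q0*q1 + q2*q3)],
        [2*(q1*q3 + q0*q2),             2*(q2*q3 - q0*q1),             q0**2 - q1**2 - q2**2 + q3**2],
    ])

# The identity quaternion yields the identity rotation.
R = quat_to_rotmat([1.0, 0.0, 0.0, 0.0])
```

For any unit quaternion the resulting matrix is orthogonal, which is why the quaternion is a convenient pose parameterization during fitting.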
4. The face key point detection method based on a three-dimensional face model according to claim 3, characterized in that generating the pose-adaptive feature in step 02 includes:
Computing the two-dimensional cylindrical coordinates of each vertex of the three-dimensional face model, and sampling n*n anchor points equally spaced along the azimuth axis and the height axis; during fitting, applying deformation, scaling, rotation and translation to these anchor points using the parameters of the current model to obtain the positions of the anchor points on the image, thereby generating the pose-adaptive feature.
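A minimal numerical sketch of the anchor sampling described in claim 4, using toy vertices and an assumed grid resolution; the axis conventions (azimuth from x and z, height along y) and all names here are hypothetical, not taken from the patent:

```python
import numpy as np

# Toy stand-in for the 3D face model vertices (x, y, z).
rng = np.random.default_rng(2)
verts = rng.standard_normal((500, 3))

# Two-dimensional cylindrical coordinates of each vertex:
# azimuth around the vertical axis, and height along it.
azimuth = np.arctan2(verts[:, 2], verts[:, 0])
height = verts[:, 1]

# Equally spaced n*n anchor grid over the azimuth/height range.
n = 8  # assumed grid resolution
az_grid = np.linspace(azimuth.min(), azimuth.max(), n)
h_grid = np.linspace(height.min(), height.max(), n)
anchors = np.stack(np.meshgrid(az_grid, h_grid), axis=-1).reshape(-1, 2)
# anchors: n*n equally spaced (azimuth, height) pairs; during fitting
# the claim maps them through the current deformation/pose onto the image.
```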
5. The face key point detection method based on a three-dimensional face model according to claim 3, characterized in that generating the projected normalized coordinate code in step 02 includes the following formulas:
$$\mathrm{PNCC}(I, p) = I \mathbin{\&} \mathrm{ZBuffer}(V_{3d}(p), \mathrm{NCC})$$
$$V_{3d}(p) = f \cdot R \cdot \left(\bar{S} + A_{id}\,\alpha_{id} + A_{exp}\,\alpha_{exp}\right) + t_{3d}$$
Wherein PNCC is the projected normalized coordinate code, I is the input face image, p is the current parameter, & is the stacking operation along the channel dimension, ZBuffer is the function that renders the three-dimensional mesh with the NCC as texture to generate a two-dimensional image, and V3d(p) is the three-dimensional face after scaling, rotation, translation and deformation; the image generated by the stacking is the projected normalized coordinate code.
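The shape part of claim 5, V3d(p), can be sketched with toy dimensions as below; the ZBuffer rendering and channel stacking are omitted since they require a rasterizer, and all array contents are random stand-ins for the 3DMM mean shape and bases:

```python
import numpy as np

# Toy dimensions: N vertices, n_id identity bases, n_exp expression bases.
N, n_id, n_exp = 100, 40, 10
rng = np.random.default_rng(0)
S_bar = rng.standard_normal((3, N))          # mean shape S-bar
A_id = rng.standard_normal((3 * N, n_id))    # identity basis
A_exp = rng.standard_normal((3 * N, n_exp))  # expression basis

def v3d(f, R, alpha_id, alpha_exp, t3d):
    """V3d(p) = f * R * (S_bar + A_id a_id + A_exp a_exp) + t3d:
    the 3D face after deformation, scaling, rotation and translation."""
    deform = (A_id @ alpha_id + A_exp @ alpha_exp).reshape(3, N, order='F')
    return f * (R @ (S_bar + deform)) + t3d.reshape(3, 1)

# With zero deformation, unit scale, identity rotation and zero
# translation, V3d reduces to the mean shape.
V = v3d(1.0, np.eye(3), np.zeros(n_id), np.zeros(n_exp), np.zeros(3))
```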
6. The face key point detection method based on a three-dimensional face model according to claim 1, characterized in that step 03 specifically includes:
Transforming the pose-adaptive feature and the projected normalized coordinate code respectively with two parallel convolutional neural networks, and merging the transformed features with an additional fully connected layer; the fused result is regressed to obtain the parameter residual.
7. The face key point detection method based on a three-dimensional face model according to claim 6, characterized in that the formula for computing the parameter residual in step 03 is:
$$\Delta p^{k} = \mathrm{Net}^{k}\!\left(\mathrm{PAF}(p^{k}, I),\ \mathrm{PNCC}(p^{k}, I)\right)$$
Wherein p^k is the current parameter, I is the input image, Δp^k is the residual between the current parameter and the ground-truth parameter, PAF is the pose-adaptive feature, PNCC is the projected normalized coordinate code, and Net^k is the two-stream parallel convolutional neural network.
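Combining claims 7 and 9 gives a cascaded update loop: each network predicts a residual from the current features, and the parameter is updated by addition. A toy sketch, where the "network" and feature extractors are stand-in lambdas (all hypothetical):

```python
import numpy as np

def fit(I, p0, nets, paf, pncc):
    """Cascaded regression: delta p^k = Net^k(PAF(p^k, I), PNCC(p^k, I)),
    then p^{k+1} = p^k + delta p^k (claims 7 and 9)."""
    p = p0
    for net in nets:
        dp = net(paf(p, I), pncc(p, I))  # parameter residual at iteration k
        p = p + dp                       # additive parameter update
    return p

# Stand-in features and a "network" that moves p halfway to a fixed target,
# so the cascade visibly converges.
target = np.array([1.0, 2.0, 3.0])
paf = lambda p, I: p
pncc = lambda p, I: p
net = lambda f1, f2: 0.5 * (target - f1)
p = fit(None, np.zeros(3), [net] * 20, paf, pncc)
```

Each iteration halves the remaining parameter error here, so after 20 stages p is essentially at the target; in the patent, each stage is instead a trained two-stream CNN.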
8. The face key point detection method based on a three-dimensional face model according to any one of claims 1 to 7, characterized in that step 03 further includes training the convolutional neural network, where the ground-truth residual is weighted during training according to the formula:
$$w^{*} = \arg\min_{w} \left\| V\!\left(p^{c} + \mathrm{diag}(w)\,(p^{g} - p^{c})\right) - V(p^{g}) \right\|^{2} + \lambda \left\| \mathrm{diag}(w)\,(p^{g} - p^{c}) \right\|^{2}$$
Wherein p^c = p^0 + Δp, 0 ≤ w ≤ 1, w is the parameter weight, Δp is the output of the convolutional neural network, p^g is the ground-truth parameter, p^0 is the input parameter of the current iteration, p^c is the current parameter, V(p) is the deformation and weak perspective projection function, and diag constructs a diagonal matrix.
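The trade-off in claim 8 can be checked on a toy linear stand-in for V(p), where the two extremes of w are easy to evaluate by hand; everything below (the matrix J, λ, the random parameters) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((6, 4))      # linear stand-in for the projection V
V = lambda p: J @ p

p_g = rng.standard_normal(4)         # ground-truth parameter
p_c = p_g + rng.standard_normal(4)   # current parameter
lam = 0.1                            # assumed regularization weight

def cost(w):
    """||V(p_c + diag(w)(p_g - p_c)) - V(p_g)||^2
       + lambda * ||diag(w)(p_g - p_c)||^2   (claim 8)."""
    step = np.diag(w) @ (p_g - p_c)
    return (np.linalg.norm(V(p_c + step) - V(p_g)) ** 2
            + lam * np.linalg.norm(step) ** 2)

# w = 1 steps all the way to p_g, zeroing the fitting term but paying
# the full penalty; w = 0 stays at p_c and pays only the fitting error.
c_all, c_none = cost(np.ones(4)), cost(np.zeros(4))
```

The optimal w thus weights each parameter dimension by how much its error actually moves the projected shape, which is the point of the weighting scheme.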
9. The face key point detection method based on a three-dimensional face model according to any one of claims 1 to 7, characterized in that updating the initial parameter according to the parameter residual in step 04 specifically means adding the parameter residual to the initial parameter.
CN201710159215.1A 2017-03-17 2017-03-17 Face key point detection method based on three-dimensional face model Active CN107122705B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710159215.1A CN107122705B (en) 2017-03-17 2017-03-17 Face key point detection method based on three-dimensional face model


Publications (2)

Publication Number Publication Date
CN107122705A true CN107122705A (en) 2017-09-01
CN107122705B CN107122705B (en) 2020-05-19

Family

ID=59717971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710159215.1A Active CN107122705B (en) 2017-03-17 2017-03-17 Face key point detection method based on three-dimensional face model

Country Status (1)

Country Link
CN (1) CN107122705B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102081733A (en) * 2011-01-13 2011-06-01 西北工业大学 Multi-modal information combined pose-varied three-dimensional human face five-sense organ marking point positioning method
CN105005755A (en) * 2014-04-25 2015-10-28 北京邮电大学 Three-dimensional face identification method and system
CN105354531A (en) * 2015-09-22 2016-02-24 成都通甲优博科技有限责任公司 Marking method for facial key points
CN105678284A (en) * 2016-02-18 2016-06-15 浙江博天科技有限公司 Fixed-position human behavior analysis method
CN106022228A (en) * 2016-05-11 2016-10-12 东南大学 Three-dimensional face recognition method based on vertical and horizontal local binary pattern on the mesh

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIANGYU ZHU et al.: "Face Alignment Across Large Poses: A 3D Solution", 2016 IEEE Conference on Computer Vision and Pattern Recognition *

Cited By (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107729904A (en) * 2017-10-09 2018-02-23 广东工业大学 A kind of face pore matching process based on the limitation of 3 D deformation face
CN109726613A (en) * 2017-10-27 2019-05-07 虹软科技股份有限公司 A kind of method and apparatus for detection
CN109726613B (en) * 2017-10-27 2021-09-10 虹软科技股份有限公司 Method and device for detection
US11017557B2 (en) 2017-10-27 2021-05-25 Arcsoft Corporation Limited Detection method and device thereof
CN107944367A (en) * 2017-11-16 2018-04-20 北京小米移动软件有限公司 Face critical point detection method and device
CN107967454A (en) * 2017-11-24 2018-04-27 武汉理工大学 Take the two-way convolutional neural networks Classification in Remote Sensing Image method of spatial neighborhood relation into account
CN107967454B (en) * 2017-11-24 2021-10-15 武汉理工大学 Double-path convolution neural network remote sensing classification method considering spatial neighborhood relationship
CN108229313A (en) * 2017-11-28 2018-06-29 北京市商汤科技开发有限公司 Face identification method and device, electronic equipment and computer program and storage medium
CN108229313B (en) * 2017-11-28 2021-04-16 北京市商汤科技开发有限公司 Face recognition method and apparatus, electronic device, computer program, and storage medium
CN109934058B (en) * 2017-12-15 2021-08-06 北京市商汤科技开发有限公司 Face image processing method, face image processing device, electronic apparatus, storage medium, and program
CN109934058A (en) * 2017-12-15 2019-06-25 北京市商汤科技开发有限公司 Face image processing process, device, electronic equipment, storage medium and program
CN113688737A (en) * 2017-12-15 2021-11-23 北京市商汤科技开发有限公司 Face image processing method, face image processing device, electronic apparatus, storage medium, and program
CN108320274A (en) * 2018-01-26 2018-07-24 东华大学 It is a kind of to recycle the infrared video colorization method for generating confrontation network based on binary channels
CN108876894A (en) * 2018-02-01 2018-11-23 北京旷视科技有限公司 Three-dimensional face model and three-dimensional headform's generation method and generating means
CN108764048B (en) * 2018-04-28 2021-03-16 中国科学院自动化研究所 Face key point detection method and device
CN108764048A (en) * 2018-04-28 2018-11-06 中国科学院自动化研究所 Face critical point detection method and device
CN108898556A (en) * 2018-05-24 2018-11-27 麒麟合盛网络技术股份有限公司 A kind of image processing method and device of three-dimensional face
CN111819568A (en) * 2018-06-01 2020-10-23 华为技术有限公司 Method and device for generating face rotation image
CN111819568B (en) * 2018-06-01 2024-07-09 华为技术有限公司 Face rotation image generation method and device
CN109035338A (en) * 2018-07-16 2018-12-18 深圳辰视智能科技有限公司 Point cloud and picture fusion method, device and its equipment based on single scale feature
CN109035338B (en) * 2018-07-16 2020-11-10 深圳辰视智能科技有限公司 Point cloud and picture fusion method, device and equipment based on single-scale features
CN109299643B (en) * 2018-07-17 2020-04-14 深圳职业技术学院 Face recognition method and system based on large-posture alignment
CN109299643A (en) * 2018-07-17 2019-02-01 深圳职业技术学院 A kind of face identification method and system based on big attitude tracking
CN109087379A (en) * 2018-08-09 2018-12-25 北京华捷艾米科技有限公司 The moving method of human face expression and the moving apparatus of human face expression
US11403819B2 (en) 2018-08-16 2022-08-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Three-dimensional model processing method, electronic device, and readable storage medium
CN109255827A (en) * 2018-08-24 2019-01-22 太平洋未来科技(深圳)有限公司 Three-dimensional face images generation method, device and electronic equipment
WO2020037676A1 (en) * 2018-08-24 2020-02-27 太平洋未来科技(深圳)有限公司 Three-dimensional face image generation method and apparatus, and electronic device
WO2020041934A1 (en) * 2018-08-27 2020-03-05 华为技术有限公司 Data processing device and data processing method
CN109300114A (en) * 2018-08-30 2019-02-01 西南交通大学 The minimum target components of high iron catenary support device hold out against missing detection method
CN109448007A (en) * 2018-11-02 2019-03-08 北京迈格威科技有限公司 Image processing method, image processing apparatus and storage medium
CN109448007B (en) * 2018-11-02 2020-10-09 北京迈格威科技有限公司 Image processing method, image processing apparatus, and storage medium
CN109726692A (en) * 2018-12-29 2019-05-07 重庆集诚汽车电子有限责任公司 High-definition camera 3D object detection system based on deep learning
CN109902616B (en) * 2019-02-25 2020-12-01 清华大学 Human face three-dimensional feature point detection method and system based on deep learning
CN109902616A (en) * 2019-02-25 2019-06-18 清华大学 Face three-dimensional feature point detecting method and system based on deep learning
CN109934196A (en) * 2019-03-21 2019-06-25 厦门美图之家科技有限公司 Human face posture parameter evaluation method, apparatus, electronic equipment and readable storage medium storing program for executing
CN110136243B (en) * 2019-04-09 2023-03-17 五邑大学 Three-dimensional face reconstruction method, system, device and storage medium thereof
CN110136243A (en) * 2019-04-09 2019-08-16 五邑大学 A kind of three-dimensional facial reconstruction method and its system, device, storage medium
CN110059602B (en) * 2019-04-10 2022-03-15 武汉大学 Forward projection feature transformation-based overlook human face correction method
CN110059602A (en) * 2019-04-10 2019-07-26 武汉大学 A kind of vertical view face antidote based on orthographic projection eigentransformation
CN110008873A (en) * 2019-04-25 2019-07-12 北京华捷艾米科技有限公司 Facial expression method for catching, system and equipment
CN112215050A (en) * 2019-06-24 2021-01-12 北京眼神智能科技有限公司 Nonlinear 3DMM face reconstruction and posture normalization method, device, medium and equipment
CN110298319A (en) * 2019-07-01 2019-10-01 北京字节跳动网络技术有限公司 Image composition method and device
CN110298319B (en) * 2019-07-01 2021-10-08 北京字节跳动网络技术有限公司 Image synthesis method and device
CN110348406A (en) * 2019-07-15 2019-10-18 广州图普网络科技有限公司 Parameter deducing method and device
CN110348406B (en) * 2019-07-15 2021-11-02 广州图普网络科技有限公司 Parameter estimation method and device
CN110705355A (en) * 2019-08-30 2020-01-17 中国科学院自动化研究所南京人工智能芯片创新研究院 Face pose estimation method based on key point constraint
CN110866962B (en) * 2019-11-20 2023-06-16 成都威爱新经济技术研究院有限公司 Virtual portrait and expression synchronization method based on convolutional neural network
CN110866962A (en) * 2019-11-20 2020-03-06 成都威爱新经济技术研究院有限公司 Virtual portrait and expression synchronization method based on convolutional neural network
CN111062266B (en) * 2019-11-28 2022-07-15 东华理工大学 Face point cloud key point positioning method based on cylindrical coordinates
CN111062266A (en) * 2019-11-28 2020-04-24 东华理工大学 Face point cloud key point positioning method based on cylindrical coordinates
CN111145166A (en) * 2019-12-31 2020-05-12 北京深测科技有限公司 Safety monitoring method and system
CN111145166B (en) * 2019-12-31 2023-09-01 北京深测科技有限公司 Security monitoring method and system
CN111222469A (en) * 2020-01-09 2020-06-02 浙江工业大学 Coarse-to-fine human face posture quantitative estimation method
CN111401157A (en) * 2020-03-02 2020-07-10 中国电子科技集团公司第五十二研究所 Face recognition method and system based on three-dimensional features
CN111489435B (en) * 2020-03-31 2022-12-27 天津大学 Self-adaptive three-dimensional face reconstruction method based on single image
CN111489435A (en) * 2020-03-31 2020-08-04 天津大学 Self-adaptive three-dimensional face reconstruction method based on single image
CN111898552B (en) * 2020-07-31 2022-12-27 成都新潮传媒集团有限公司 Method and device for distinguishing person attention target object and computer equipment
CN111898552A (en) * 2020-07-31 2020-11-06 成都新潮传媒集团有限公司 Method and device for distinguishing person attention target object and computer equipment
CN112002014A (en) * 2020-08-31 2020-11-27 中国科学院自动化研究所 Three-dimensional face reconstruction method, system and device for fine structure
CN112002014B (en) * 2020-08-31 2023-12-15 中国科学院自动化研究所 Fine structure-oriented three-dimensional face reconstruction method, system and device
CN112307899A (en) * 2020-09-27 2021-02-02 中国科学院宁波材料技术与工程研究所 Facial posture detection and correction method and system based on deep learning
WO2022089360A1 (en) * 2020-10-28 2022-05-05 广州虎牙科技有限公司 Face detection neural network and training method, face detection method, and storage medium
CN112800882A (en) * 2021-01-15 2021-05-14 南京航空航天大学 Mask face posture classification method based on weighted double-flow residual error network
CN113269862A (en) * 2021-05-31 2021-08-17 中国科学院自动化研究所 Scene-adaptive fine three-dimensional face reconstruction method, system and electronic equipment
CN113643366B (en) * 2021-07-12 2024-03-05 中国科学院自动化研究所 Multi-view three-dimensional object attitude estimation method and device
CN113643366A (en) * 2021-07-12 2021-11-12 中国科学院自动化研究所 Multi-view three-dimensional object attitude estimation method and device
CN113838134B (en) * 2021-09-26 2024-03-12 广州博冠信息科技有限公司 Image key point detection method, device, terminal and storage medium
CN113838134A (en) * 2021-09-26 2021-12-24 广州博冠信息科技有限公司 Image key point detection method, device, terminal and storage medium
CN113870227A (en) * 2021-09-29 2021-12-31 赛诺威盛科技(北京)股份有限公司 Medical positioning method and device based on pressure distribution, electronic equipment and storage medium
CN114125273B (en) * 2021-11-05 2023-04-07 维沃移动通信有限公司 Face focusing method and device and electronic equipment
CN114125273A (en) * 2021-11-05 2022-03-01 维沃移动通信有限公司 Face focusing method and device and electronic equipment
CN114267065A (en) * 2021-12-23 2022-04-01 广州津虹网络传媒有限公司 Face key point correction method and device, equipment and medium thereof
CN114360031B (en) * 2022-03-15 2022-06-21 南京甄视智能科技有限公司 Head pose estimation method, computer device, and storage medium
CN114360031A (en) * 2022-03-15 2022-04-15 南京甄视智能科技有限公司 Head pose estimation method, computer device, and storage medium

Also Published As

Publication number Publication date
CN107122705B (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN107122705A (en) Face critical point detection method based on three-dimensional face model
CN105856230B (en) A kind of ORB key frames closed loop detection SLAM methods for improving robot pose uniformity
CN104933755B (en) A kind of stationary body method for reconstructing and system
CN105843223B (en) A kind of mobile robot three-dimensional based on space bag of words builds figure and barrier-avoiding method
CN104537709B (en) It is a kind of that method is determined based on the real-time three-dimensional reconstruction key frame that pose changes
CN102880866B (en) Method for extracting face features
CN103106688B (en) Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
CN112270249A (en) Target pose estimation method fusing RGB-D visual features
WO2023184968A1 (en) Structured scene visual slam method based on point line surface features
CN111126304A (en) Augmented reality navigation method based on indoor natural scene image deep learning
CN109671120A (en) A kind of monocular SLAM initial method and system based on wheel type encoder
CN102136142B (en) Nonrigid medical image registration method based on self-adapting triangular meshes
CN109544677A (en) Indoor scene main structure method for reconstructing and system based on depth image key frame
CN103106667A (en) Motion target tracing method towards shielding and scene change
CN111368673A (en) Method for quickly extracting human body key points based on neural network
CN106485690A (en) Cloud data based on a feature and the autoregistration fusion method of optical image
CN102411794B (en) Output method of two-dimensional (2D) projection of three-dimensional (3D) model based on spherical harmonic transform
CN113256698B (en) Monocular 3D reconstruction method with depth prediction
CN107424161A (en) A kind of indoor scene image layout method of estimation by thick extremely essence
CN108537844A (en) A kind of vision SLAM winding detection methods of fusion geological information
CN107729806A (en) Single-view Pose-varied face recognition method based on three-dimensional facial reconstruction
CN107203759A A kind of branch recursion road reconstruction algorithm based on two view geometries
CN105513094A (en) Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation
CN110458128A (en) A kind of posture feature acquisition methods, device, equipment and storage medium
CN103049921B (en) Method for determining image centroid of small irregular celestial body for deep space autonomous navigation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant