CN109359526A - Face pose estimation method, device and equipment - Google Patents

Face pose estimation method, device and equipment

Info

Publication number
CN109359526A
Authority
CN
China
Prior art keywords
model
picture
face
training
characteristic point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811054415.1A
Other languages
Chinese (zh)
Other versions
CN109359526B (en)
Inventor
田劲东
张祖光
李晓宇
田勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN201811054415.1A
Publication of CN109359526A
Application granted
Publication of CN109359526B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention discloses a face pose estimation method, device and equipment, and relates to the technical field of image recognition. The method comprises the steps of: reading a picture to be estimated; identifying the facial feature points of the picture to be estimated according to a first model; and identifying the face pose angles of the picture to be estimated according to a second model and the facial feature points. With the invention, face pose angle data can be generated directly and in real time, and predicted values can be output continuously, so the method can be widely applied in fields that require real-time, continuous judgment of face pose. The training and test data sets for face pose generated by this method avoid manual annotation and improve data set accuracy. In addition, a new loss function is proposed for the convolutional neural network facial feature point model in the first model; the new loss function is more amenable to machine computation and accelerates the calculation of the model.

Description

Face pose estimation method, device and equipment
Technical field
The present invention relates to the field of image recognition technology, and in particular to a face pose estimation method, device and equipment.
Background technique
At present, face pose estimation plays an important role in fields such as face recognition and human-computer interaction. Changes in face pose cause loss of and differences in face information, so that the similarity between side-face images of different people can be higher than the similarity between a side-face and a frontal image of the same person. In practical applications such as customs, airports and exhibition centers, and in public-security systems for tracking criminals, current face recognition technology is therefore restricted. Face pose estimation is thus extremely important for pose-varied face recognition. In addition, face pose estimation is also widely applied in smart cities and driver-fatigue monitoring.
According to differences in principle and implementation, existing face pose estimation methods can generally be divided into six classes: (1) shape-template matching methods; (2) detection-classifier methods; (3) feature-regression methods; (4) manifold-embedding methods; (5) local constrained model methods; (6) methods based on the geometric relationship of facial feature points. Among these, methods based on the geometric relationship of facial feature points have the advantages of simplicity, short running time and high efficiency, and facial feature point extraction algorithms based on convolutional neural networks guarantee the detection and localization accuracy of the facial key points. Pose estimation based on a feedforward neural network effectively avoids the ill-conditioning problems of manually built estimation models.
Patent CN108197547A discloses a face pose estimation method, device, terminal and storage medium that builds two classification models on a residual network; the first classification model performs coarse classification and the second performs fine classification. The problem is that the final output is a classification result, so when a sample to be estimated lies near a class boundary the accuracy of the output drops.
Patent CN105159452A discloses a control method and system based on face pose estimation, which identifies the face with a fast facial recognition algorithm (for example a constrained local model, CLM); the classification model used can be an active appearance model (AAM), the pose information corresponding to each face is labeled by hand, and a face pose recognizer is then trained. The problems are that the steps are complicated and the accuracy is limited by the AAM algorithm and by manual labeling error.
Summary of the invention
The present invention aims to solve at least some of the technical problems in the related art. To this end, one object of the invention is to provide a face pose estimation method and device that can label feature points and pose angles immediately and with high accuracy.
A first aspect of the present invention proposes a face pose estimation method, comprising the steps of:
reading a picture to be estimated;
identifying the facial feature points of the picture to be estimated according to a first model;
identifying the face pose angles of the picture to be estimated according to a second model and the facial feature points.
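For orientation only, a minimal sketch of these three steps in Python is given below; `first_model` and `second_model` are hypothetical wrappers around the two trained networks, the 39*39 input size and the 5-point output described later in the embodiments are assumed here, and face detection and cropping are omitted for brevity:

```python
import cv2
import numpy as np

def estimate_pose(image_path, first_model, second_model):
    """Illustrative two-stage pipeline: landmarks with the first model,
    Euler angles with the second model. Both models are assumed to be
    trained regressors exposing a `predict` method (e.g. Keras models)."""
    # Step 1: read the picture to be estimated
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Step 2: identify the facial feature points with the first model
    # (the face-detection and cropping step of the later embodiments is skipped here)
    face = cv2.resize(img, (39, 39)).astype(np.float32) / 255.0
    landmarks = first_model.predict(face[np.newaxis, :, :, np.newaxis])  # shape (1, 10)

    # Step 3: identify the face pose angles with the second model
    angles = second_model.predict(landmarks)  # roll, pitch, yaw
    return landmarks.reshape(5, 2), angles.ravel()
```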
Further, after the step of reading the picture to be estimated, the method also includes: judging whether the picture contains a face, and if so, intercepting the face part of the image and normalizing the intercepted image.
Further, the method also includes training the first model, including the steps of:
preprocessing the training set pictures;
performing convolutional neural network training.
Further, the method also includes training the second model, including the steps of:
generating a pose estimation training set;
performing feedforward neural network training.
Further, the loss function used in the convolutional neural network training is:
loss = \frac{1}{n} \sum_{i=1}^{n} \sum_{j} \frac{\sqrt{(x'_{i,j_x} - x_{i,j_x})^2 + (x'_{i,j_y} - x_{i,j_y})^2}}{L}
where loss represents the error, i indexes the i-th picture in the data set, n is the total number of pictures in the data set, j indexes the j-th feature point in the image, j_x and j_y index the x- and y-coordinates of the j-th feature point, x'_{i,j} denotes the estimated coordinate of the j-th feature point of the i-th picture, x_{i,j} denotes the true coordinate of the j-th feature point of the i-th picture, the square-root term is the Euclidean distance between the estimated and true positions of the j-th feature point of the i-th picture, and L is the side length of the face image input to the convolutional neural network.
Further, the loss function used in the feedforward neural network training is:
err = \frac{1}{3n} \sum_{i=0}^{n-1} \sum_{j=1}^{3} (x_{i,j} - x'_{i,j})^2
where x_{i,j} denotes the j-th Euler angle of the head deflection of the i-th sample, i = 0, ..., n-1, j = 1, 2, 3, n is the number of samples, and x'_{i,j} denotes the predicted value of the j-th Euler angle of the deflection of the i-th sample.
Further, the step of generating the pose estimation training set comprises the steps of:
extracting the feature points of the picture to be estimated and the coordinate information of the rotation center;
rotating the feature points around the rotation center and projecting them onto a preset plane;
calculating the angle values of the pose angles in the preset plane.
Further,
the first model is a convolutional neural network model, including 1 input layer, 3 pairs of convolution-pooling layers, 1 convolutional layer and 2 fully connected layers;
the second model is a feedforward neural network model, including 11 hidden layers and 1 output layer.
A second aspect of the invention proposes a face pose estimation device, comprising:
an input module for receiving the picture to be estimated;
a first model for identifying the facial feature points of the picture to be estimated, the first model further including a first model training module for training the first model,
the first model training module including:
a training set picture preprocessing module for processing the pictures in the training set, and
a convolutional neural network module for judging the facial feature points;
a second model for identifying the face pose angles of the picture to be estimated, the second model including a second model training module for training the second model,
the second model training module including:
a pose estimation training set generation module for generating the training set needed for pose angle estimation, and
a feedforward neural network module for judging the face pose angles.
A third aspect of the invention proposes a face pose estimation apparatus, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor is able to carry out the above method.
A fourth aspect of the invention proposes a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to cause a computer to execute the above method.
The beneficial effects of the present invention are:
With the face pose estimation method of the present invention, face pose angle data can be generated directly and in real time, and predicted values can be output continuously, so the method can be widely applied in fields that require real-time, continuous judgment of face pose. The training and test data sets for face pose generated by this method avoid manual annotation and improve data set accuracy. In addition, a new loss function is proposed for the convolutional neural network facial feature point model in the first model; it is more amenable to machine computation and accelerates the calculation of the model.
Detailed description of the invention
Fig. 1 is a flowchart of a specific embodiment of the face pose estimation method of the present invention;
Fig. 2 is a schematic diagram of the facial feature points used in a specific embodiment of the face pose estimation method of the present invention;
Fig. 3 is a flowchart of another embodiment of the face pose estimation method of the present invention;
Fig. 4 is a flowchart of training the first model in a specific embodiment of the face pose estimation method of the present invention;
Fig. 5 is a schematic diagram of the neural network structure of the first model in a specific embodiment of the face pose estimation method of the present invention;
Fig. 6 is a flowchart of training the second model in a specific embodiment of the face pose estimation method of the present invention;
Fig. 7 is a flowchart of generating the pose angle estimation training set when training the second model in a specific embodiment of the face pose estimation method of the present invention;
Fig. 8 is a schematic diagram of the neural network structure of the second model in a specific embodiment of the face pose estimation method of the present invention;
Fig. 9 is a schematic diagram of the output result in a specific embodiment of the face pose estimation method of the present invention;
Fig. 10 is a structural diagram of a specific embodiment of the face pose estimation device of the present invention;
Fig. 11 is a structural diagram of a specific embodiment of the face pose estimation apparatus of the present invention.
Specific embodiments
It should be noted that, provided there is no conflict, the embodiments of the present application and the features in the embodiments can be combined with each other.
As shown in Fig. 1, which is a flowchart of a specific embodiment of the face pose estimation method of the present invention, the method includes the following steps:
S110, reading the picture to be estimated;
The picture can be acquired in real time from a camera or read from locally stored pictures, and a preliminary processing is applied to it; the processing is mean filtering.
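A possible reading and filtering step with OpenCV is sketched below; only the use of a mean filter comes from the text, while the file name and the kernel size are assumptions:

```python
import cv2

# Read the picture to be estimated (the file name here is a placeholder);
# a frame grabbed from a camera can be used in the same way.
img = cv2.imread("to_estimate.jpg")

# Preliminary processing: mean (box) filtering; the 3x3 kernel size is an assumption.
img = cv2.blur(img, (3, 3))
```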
S130, identifying the facial feature points of the picture to be estimated according to the first model;
In this embodiment, the first model is trained in advance so that it can recognize faces.
Facial feature points differ from the key points of a general image. The key points of a general image usually lie at local pixel maxima or minima, at maximum or minimum gray values, and have certain gradient features. The feature points of a face, by contrast, can be defined manually and mainly include the positions of the facial organs, the facial contour, and so on. Commonly defined shapes include the 5-point shape model, the 29-point shape model and the 68-point shape model. As shown in Fig. 2, this embodiment uses the data of the 5-point shape consisting of left eye 1, right eye 2, nose 3, left mouth corner 4 and right mouth corner 5.
S150, identifying the face pose angles of the picture to be estimated according to the second model and the facial feature points.
The feature point data of the face are input into the second model, the face pose angles of the picture to be estimated are identified, and the result is labeled in the picture.
In this embodiment, the second model is trained in advance so that it can perform pose estimation of the face.
In this embodiment, the facial feature points are determined by the first model, and the face pose is judged by the second model from the facial feature points; the pose angle data of the face can thus be calculated in real time and continuously, which has important application value in fields that require real-time, continuous judgment of face pose, such as driver-fatigue monitoring.
As shown in Fig. 3, in another embodiment, step S120 is further included after step S110: judging whether there is a face in the picture; the specific detection can use a Haar operator for face detection.
If face information is included, step S121 is entered: the face part of the image is intercepted and the intercepted image is normalized.
The face part of the picture is intercepted and saved as a 39*39-pixel picture to serve as the input picture of the first model and the second model, while the position of the face in the original picture to be estimated is recorded.
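One way steps S120 and S121 could be realized with OpenCV's Haar cascade detector is sketched below; the cascade file, detection parameters and division by 255 are assumptions, while the Haar operator, the 39*39 crop and the recording of the face position come from the text:

```python
import cv2
import numpy as np

img = cv2.imread("to_estimate.jpg")                      # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# S120: judge whether the picture contains a face, using a Haar operator
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    # S121: intercept the face part and record its position (x_R, y_R, w_R)
    x_r, y_r, w_r, h_r = faces[0]
    crop = gray[y_r:y_r + h_r, x_r:x_r + w_r]
    # Normalise to the 39*39 input expected by the first and second models
    face_39 = cv2.resize(crop, (39, 39)).astype(np.float32) / 255.0
```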
This embodiment also includes training the first model; the specific steps, shown in Fig. 4, comprise:
S210, preprocessing the training set pictures;
The LFW face image data set can be used for face training. The preprocessing includes converting the data set images to grayscale, intercepting the face part and saving it as a 39*39-pixel picture.
The position coordinates of the facial feature points are labeled in the training set images; the feature points are the left eye, right eye, nose, left mouth corner and right mouth corner.
S220, performing convolutional neural network training.
Convolutional neural networks perform better when handling complex scenes, for example under complex illumination, pose, expression and occlusion conditions.
The convolutional neural network structure is defined. The neural network used in this embodiment has a 7-layer structure, including 1 input layer, 3 pairs of convolution-pooling layers, 1 convolutional layer and 2 fully connected layers, as shown in Fig. 5.
The activation function used is as follows:
where m and n are the numbers of neurons in the previous layer and the current layer; in the input layer, m is 1 and n, the number of neurons in the first convolutional layer, is 20; in the output layer, m is 120 and n is 10. y_j is the output of a single neuron of the current layer, where j indicates which neuron, j = 0, 1, ..., n-1. x_{i,j} is the output of the i-th neuron of the previous layer feeding the j-th neuron of the current layer, where i indicates which neuron of the previous layer, i = 0, 1, ..., m-1. w_j is the weight coefficient of the j-th neuron of the current layer, and b_j is the constant coefficient of the j-th neuron of the current layer.
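A sketch of such a 7-layer network in Keras is given below, keeping the layer counts stated above (1 input layer, 3 convolution-pooling pairs, 1 convolutional layer, 2 fully connected layers, 20 filters in the first convolution and 120 units before the 10-dimensional output); the remaining filter counts, kernel sizes and the ReLU activation are assumptions, since the patent's activation formula is not reproduced here:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_first_model():
    """Landmark regressor: 39x39 grey face -> 10 outputs (5 points, x and y)."""
    inputs = keras.Input(shape=(39, 39, 1))                          # input layer
    x = layers.Conv2D(20, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D(2)(x)                                    # conv-pool pair 1
    x = layers.Conv2D(40, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2)(x)                                    # conv-pool pair 2
    x = layers.Conv2D(60, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2)(x)                                    # conv-pool pair 3
    x = layers.Conv2D(80, 3, activation="relu", padding="same")(x)   # single conv layer
    x = layers.Flatten()(x)
    x = layers.Dense(120, activation="relu")(x)                      # fully connected layer 1
    outputs = layers.Dense(10)(x)                                    # fully connected layer 2
    return keras.Model(inputs, outputs)
```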
The loss function used is:
loss = \frac{1}{n} \sum_{i=1}^{n} \sum_{j} \frac{\sqrt{(x'_{i,j_x} - x_{i,j_x})^2 + (x'_{i,j_y} - x_{i,j_y})^2}}{L}
where loss represents the error, i indexes the i-th picture in the data set, n is the total number of pictures in the data set, j indexes the j-th feature point in the image, j_x and j_y index the x- and y-coordinates of the j-th feature point, x'_{i,j} denotes the estimated coordinate of the j-th feature point of the i-th picture, x_{i,j} denotes the true coordinate of the j-th feature point of the i-th picture, the square-root term is the Euclidean distance between the estimated and true positions of the j-th feature point of the i-th picture, and L is the side length of the face image input to the convolutional neural network. The introduction of L makes the output a ratio of the feature point position to the face size, which effectively avoids the problem of differing face scales.
Compared with existing loss functions, this loss function is more amenable to machine arithmetic, has higher operational efficiency and computes faster.
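Written out as a Keras-compatible loss, a sketch that assumes the coordinates are given in pixels of the L x L face crop and follows the reconstructed formula above could look like this:

```python
import tensorflow as tf

L = 39.0  # side length of the face image input to the convolutional network

def landmark_loss(y_true, y_pred):
    """Per-point Euclidean distance between true and estimated feature points,
    normalised by the face side length L and averaged over points and pictures."""
    true_pts = tf.reshape(y_true, (-1, 5, 2))
    pred_pts = tf.reshape(y_pred, (-1, 5, 2))
    dist = tf.sqrt(tf.reduce_sum(tf.square(pred_pts - true_pts), axis=-1))
    return tf.reduce_mean(dist) / L
```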
After the first model has been trained for 4600 steps, it is tested on the data set and the root-mean-square errors are calculated:
err_train denotes the root-mean-square error of the model's estimates on the training set, and err_test denotes the root-mean-square error of its estimates on the test set. The facial feature point estimation model is trained as a regression, so the estimation accuracy can be read directly from the loss function. Since the estimates and true values are all normalized ratio values, err_train and err_test are likewise ratios. The training results show that the mean square error of the convolutional neural network is 0.00037128685 on the training set and 0.0004455708 on the test set. Once the results converge, training of the first model is complete.
This embodiment also includes training the second model; the specific steps are shown in Fig. 6:
S310, generating the pose angle estimation training set;
The face pose data can be reasonably simplified and generated as follows: the facial feature point model is rotated by reasonable Euler angles and then projected onto a preset plane.
Further, as shown in Fig. 7, generating the pose angle estimation training set includes the following steps:
S311, extracting the facial feature points of the picture to be estimated and the coordinate information of the rotation center.
The facial feature points can be calculated by the first model, and the rotation center coordinates can be calculated from a human head model; the rotation center and the facial feature points satisfy a geometric distribution constraint.
S312, rotating the feature points around the rotation center and projecting them onto the preset plane.
The x-axis, y-axis and z-axis are set as the three axes of a rectangular coordinate system, where the z-axis is the straight line that passes through the center point of the crown of the head and is perpendicular to the horizontal plane, the y-axis can be the straight line parallel to the line connecting the centers of the two eyeballs, and the rotation center is defined as the origin. The angle rotated around the x-axis is defined as the roll angle (roll), the angle rotated around the y-axis as the pitch angle (pitch), and the angle rotated around the z-axis as the yaw angle (yaw). The initial state is set so that the normal vector of the face plane points toward the yoz plane. The facial feature points are projected onto the yoz plane.
S313, calculating the angle values of the pose angles in the preset plane.
The pose angle information is calculated from the projections of the key facial feature points on the yoz plane.
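A sketch of how the (projected feature points, Euler angles) training pairs of steps S311 to S313 could be produced with NumPy is given below; the 3-D feature point template, the sampling range of the angles and the rotation order are assumptions, while the rotate-then-project-onto-yoz scheme and the 20000-sample data set come from the text:

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Rotation about the x-axis (roll), y-axis (pitch) and z-axis (yaw),
    angles in radians, following the axis convention of step S312."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

# Hypothetical 3-D template of the five feature points (x, y, z), centred on the
# rotation centre; in practice a human head model would supply these coordinates.
TEMPLATE = np.array([
    [0.3, 0.3, 0.3],     # left eye
    [0.3, -0.3, 0.3],    # right eye
    [0.45, 0.0, 0.0],    # nose
    [0.3, 0.2, -0.3],    # left mouth corner
    [0.3, -0.2, -0.3],   # right mouth corner
])

def make_sample(rng):
    angles = rng.uniform(-45.0, 45.0, size=3)              # roll, pitch, yaw in degrees (assumed range)
    rotated = TEMPLATE @ rotation_matrix(*np.deg2rad(angles)).T
    projected = rotated[:, 1:]                             # projection onto the yoz plane (drop x)
    return projected.ravel(), angles                       # network input, pose-angle label

rng = np.random.default_rng(0)
data = [make_sample(rng) for _ in range(20000)]            # 18000 for training, 2000 for testing
```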
In this embodiment, 20000 data items are generated in total to constitute the data set, of which 18000 items form the training set and 2000 items form the test set.
After the pose angle estimation training set has been generated, step S320 is entered and feedforward neural network training is carried out. Fig. 8 is a schematic diagram of the neural network structure used in this implementation, which includes 11 hidden layers and 1 output layer in total. The activation function used is as follows:
m and n are the numbers of neurons in the previous layer and the current layer; in the input layer, m is 1 and n, the number of neurons in the first hidden layer, is 28; in the output layer, m is 8 and n is 3. y_j is the output of a single neuron of the current layer, where j indicates which neuron, j = 0, 1, ..., n-1. x_{i,j} is the output of the i-th neuron of the previous layer, where i indicates which neuron of the previous layer, i = 0, 1, ..., m-1. w_j is the weight coefficient of the j-th neuron of the current layer, and b_j is the constant coefficient of the j-th neuron of the current layer.
The loss function used is:
err = \frac{1}{3n} \sum_{i=0}^{n-1} \sum_{j=1}^{3} (x_{i,j} - x'_{i,j})^2
The loss function is defined as the mean square error between each Euler angle of the head deflection in a sample and its estimated value; err is the final value of the loss function, and the parameters of the feedforward neural network model are obtained by iteratively minimizing the loss function. x_{i,j} denotes the j-th Euler angle of the head deflection of the i-th sample, i = 0, ..., n-1, j = 1, 2, 3; n is the number of samples and i starts from 0. x'_{i,j} denotes the predicted value of the j-th Euler angle of the deflection of the i-th sample.
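A sketch of the second model in Keras with the mean-squared-error loss above is shown below; the input dimension of 10 (five projected feature points), the intermediate hidden-layer widths and the ReLU activation are assumptions, while the 11 hidden layers, the 28-unit first layer, the 8-unit last hidden layer and the 3-angle output come from the text:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_second_model(input_dim=10):
    """Feedforward pose regressor: projected feature points -> (roll, pitch, yaw)."""
    inputs = keras.Input(shape=(input_dim,))
    x = inputs
    # 11 hidden layers; 28 units in the first and 8 in the last come from the text,
    # the intermediate widths are assumed.
    for width in [28, 28, 24, 24, 20, 20, 16, 16, 12, 12, 8]:
        x = layers.Dense(width, activation="relu")(x)
    outputs = layers.Dense(3)(x)                     # output layer: roll, pitch, yaw
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")      # err: mean squared Euler-angle error
    return model
```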
The second model converges after 4000 training steps; the test results are as follows:
err_train, the mean square error of the feedforward model on the training set, is 0.2545835°, and err_test, the mean square error on the test set, is 0.26021713°.
In this embodiment, as shown in Fig. 9, which is a schematic diagram of the output result of a specific embodiment of the face pose estimation method of the present invention, (x_R, y_R) is the upper-left corner coordinate of the face region and w_R is the side length of the face region, so the face region can be denoted by (x_R, y_R, w_R). The result output by the first model is the ratio of each facial feature point's horizontal and vertical position within the face; multiplying this ratio by the side length of the face region and adding the relative position of the face in the whole picture gives the coordinate of the feature point in the whole picture. The specific calculation formula is:
x_i = x_R + x'_i \cdot w_R,  y_i = y_R + y'_i \cdot w_R
where (x'_i, y'_i) is the coordinate ratio output by the first model, i.e. the proportion of the point's position within the face region R, and (x_i, y_i) is the coordinate of the facial feature point in the picture, obtained after this calculation and recovered from the normalized values.
The specific results in this embodiment are:
The pose angles of the face can be labeled directly in the picture, specifically:
Angle                  Roll (roll angle)   Pitch (pitch angle)   Yaw (yaw angle)
Angle value / degrees  5.801024            -22.002823            0.9343903
In conclusion the present invention exports the characteristic point and posture information of face by the first model and the second model in real time, First model uses convolutional neural networks, proposes newly suitable for the recognition of face under complex environment, while for the first model Loss function, shorten the training time of the first model, the invention also discloses a kind of posture simplified calculation methods, with generation Pose estimation training set, the image characteristic point generated through the invention and attitude data collection avoid previous manual mark It is time-consuming and laborious, it can be further used for subsequent image recognition.
As shown in Fig. 10, the invention also discloses a device, comprising:
an input module for receiving the picture to be estimated;
a first model for identifying the facial feature points of the picture to be estimated, the first model further including a first model training module for training the first model,
the first model training module including:
a training set picture preprocessing module for processing the pictures in the training set, and
a convolutional neural network module for judging the facial feature points;
a second model for identifying the face pose angles of the picture to be estimated, the second model including a second model training module for training the second model,
the second model training module including:
a pose estimation training set generation module for generating the training set needed for pose angle estimation, and
a feedforward neural network module for judging the face pose angles.
As shown in Fig. 11, the invention also discloses a face pose estimation apparatus, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor is able to carry out the above method.
The invention also discloses a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to cause a computer to execute the above method.
The above describes the preferred implementations of the present invention, but the invention is not limited to the above embodiments. Those skilled in the art can also make various equivalent variations or replacements without departing from the spirit of the invention, and these equivalent variations or replacements are all included within the scope defined by the claims of the present application.

Claims (10)

1. A face pose estimation method, characterized in that the method comprises the steps of:
reading a picture to be estimated;
identifying the facial feature points of the picture to be estimated according to a first model;
identifying the face pose angles of the picture to be estimated according to a second model and the facial feature points.
2. The face pose estimation method according to claim 1, characterized in that, after the step of reading the picture to be estimated, the method further comprises: judging whether the picture contains a face, and if so, intercepting the face part of the image and normalizing the intercepted image.
3. The face pose estimation method according to claim 1, characterized in that the method further comprises training the first model, including the steps of:
preprocessing the training set pictures;
performing convolutional neural network training.
4. The face pose estimation method according to claim 1, characterized in that the method further comprises training the second model, including the steps of:
generating a pose estimation training set;
performing feedforward neural network training.
5. The face pose estimation method according to claim 3, characterized in that the loss function used in the convolutional neural network training is:
loss = \frac{1}{n} \sum_{i=1}^{n} \sum_{j} \frac{\sqrt{(x'_{i,j_x} - x_{i,j_x})^2 + (x'_{i,j_y} - x_{i,j_y})^2}}{L}
where loss represents the error, i indexes the i-th picture in the data set, n is the total number of pictures in the data set, j indexes the j-th feature point in the image, j_x and j_y index the x- and y-coordinates of the j-th feature point, x'_{i,j} denotes the estimated coordinate of the j-th feature point of the i-th picture, x_{i,j} denotes the true coordinate of the j-th feature point of the i-th picture, the square-root term is the Euclidean distance between the estimated and true positions of the j-th feature point of the i-th picture, and L is the side length of the face image input to the convolutional neural network.
6. The face pose estimation method according to claim 4, characterized in that the loss function used in the feedforward neural network training is:
err = \frac{1}{3n} \sum_{i=0}^{n-1} \sum_{j=1}^{3} (x_{i,j} - x'_{i,j})^2
where x_{i,j} denotes the j-th Euler angle of the head deflection of the i-th sample, i = 0, ..., n-1, j = 1, 2, 3, n is the number of samples, and x'_{i,j} denotes the predicted value of the j-th Euler angle of the deflection of the i-th sample.
7. The face pose estimation method according to claim 4, characterized in that the step of generating the pose estimation training set comprises the steps of:
extracting the facial feature points of the picture to be estimated and the coordinate information of the rotation center;
rotating the feature points around the rotation center and projecting them onto a preset plane;
calculating the angle values of the pose angles in the preset plane.
8. The face pose estimation method according to any one of claims 1 to 4, characterized in that
the first model is a convolutional neural network model, including 1 input layer, 3 pairs of convolution-pooling layers, 1 convolutional layer and 2 fully connected layers;
the second model is a feedforward neural network model, including 11 hidden layers and 1 output layer.
9. A face pose estimation device, characterized by comprising:
an input module for receiving the picture to be estimated;
a first model for identifying the facial feature points of the picture to be estimated, the first model further including a first model training module for training the first model,
the first model training module including:
a training set picture preprocessing module for processing the pictures in the training set, and
a convolutional neural network module for judging the facial feature points;
a second model for identifying the face pose angles of the picture to be estimated, the second model including a second model training module for training the second model,
the second model training module including:
a pose estimation training set generation module for generating the training set needed for pose angle estimation, and
a feedforward neural network module for judging the face pose angles.
10. A face pose estimation apparatus, characterized by comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor is able to carry out the method according to any one of claims 1 to 8.
CN201811054415.1A 2018-09-11 2018-09-11 Human face posture estimation method, device and equipment Active CN109359526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811054415.1A CN109359526B (en) 2018-09-11 2018-09-11 Human face posture estimation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811054415.1A CN109359526B (en) 2018-09-11 2018-09-11 Human face posture estimation method, device and equipment

Publications (2)

Publication Number Publication Date
CN109359526A true CN109359526A (en) 2019-02-19
CN109359526B CN109359526B (en) 2022-09-27

Family

ID=65350791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811054415.1A Active CN109359526B (en) 2018-09-11 2018-09-11 Human face posture estimation method, device and equipment

Country Status (1)

Country Link
CN (1) CN109359526B (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150348257A1 (en) * 2014-06-02 2015-12-03 Amrita Vishwa Vidyapeetham Systems and methods for yaw estimation
CN105159452A (en) * 2015-08-28 2015-12-16 成都通甲优博科技有限责任公司 Control method and system based on estimation of human face posture
CN105447462A (en) * 2015-11-20 2016-03-30 小米科技有限责任公司 Facial pose estimation method and device
CN106570460A (en) * 2016-10-20 2017-04-19 三明学院 Single-image human face posture estimation method based on depth value
CN107016708A (en) * 2017-03-24 2017-08-04 杭州电子科技大学 A kind of image Hash coding method based on deep learning
CN107609519A (en) * 2017-09-15 2018-01-19 维沃移动通信有限公司 The localization method and device of a kind of human face characteristic point
CN107729838A (en) * 2017-10-12 2018-02-23 中科视拓(北京)科技有限公司 A kind of head pose evaluation method based on deep learning
CN108197547A (en) * 2017-12-26 2018-06-22 深圳云天励飞技术有限公司 Face pose estimation, device, terminal and storage medium
CN108038474A (en) * 2017-12-28 2018-05-15 深圳云天励飞技术有限公司 Method for detecting human face, the training method of convolutional neural networks parameter, device and medium

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919727A (en) * 2019-03-12 2019-06-21 深圳市广德教育科技股份有限公司 A kind of 3D garment virtual ready-made clothes system
CN109961055A (en) * 2019-03-29 2019-07-02 广州市百果园信息技术有限公司 Face critical point detection method, apparatus, equipment and storage medium
CN110068326B (en) * 2019-04-29 2021-11-30 京东方科技集团股份有限公司 Attitude calculation method and apparatus, electronic device, and storage medium
CN110068326A (en) * 2019-04-29 2019-07-30 京东方科技集团股份有限公司 Computation method for attitude, device, electronic equipment and storage medium
CN110415323A (en) * 2019-07-30 2019-11-05 成都数字天空科技有限公司 A kind of fusion deformation coefficient preparation method, device and storage medium
CN110415323B (en) * 2019-07-30 2023-05-26 成都数字天空科技有限公司 Fusion deformation coefficient obtaining method, fusion deformation coefficient obtaining device and storage medium
CN110705355A (en) * 2019-08-30 2020-01-17 中国科学院自动化研究所南京人工智能芯片创新研究院 Face pose estimation method based on key point constraint
CN110837773A (en) * 2019-09-27 2020-02-25 深圳市华付信息技术有限公司 Large-angle face pose estimation method based on deep learning
CN110647865A (en) * 2019-09-30 2020-01-03 腾讯科技(深圳)有限公司 Face gesture recognition method, device, equipment and storage medium
CN110647865B (en) * 2019-09-30 2023-08-08 腾讯科技(深圳)有限公司 Face gesture recognition method, device, equipment and storage medium
CN112825145B (en) * 2019-11-20 2022-08-23 上海商汤智能科技有限公司 Human body orientation detection method and device, electronic equipment and computer storage medium
CN112825145A (en) * 2019-11-20 2021-05-21 上海商汤智能科技有限公司 Human body orientation detection method and device, electronic equipment and computer storage medium
CN111061899B (en) * 2019-12-18 2022-04-26 深圳云天励飞技术股份有限公司 Archive representative picture generation method and device and electronic equipment
CN111061899A (en) * 2019-12-18 2020-04-24 深圳云天励飞技术有限公司 Archive representative picture generation method and device and electronic equipment
CN111178228A (en) * 2019-12-26 2020-05-19 中云智慧(北京)科技有限公司 Face recognition method based on deep learning
CN112101247A (en) * 2020-09-18 2020-12-18 济南博观智能科技有限公司 Face pose estimation method, device, equipment and storage medium
CN112101247B (en) * 2020-09-18 2024-02-27 济南博观智能科技有限公司 Face pose estimation method, device, equipment and storage medium
CN112634363A (en) * 2020-12-10 2021-04-09 上海零眸智能科技有限公司 Shelf attitude estimation method
CN112634363B (en) * 2020-12-10 2023-10-03 上海零眸智能科技有限公司 Goods shelf posture estimating method
CN113034439B (en) * 2021-03-03 2021-11-23 北京交通大学 High-speed railway sound barrier defect detection method and device
CN113034439A (en) * 2021-03-03 2021-06-25 北京交通大学 High-speed railway sound barrier defect detection method and device
CN112949576A (en) * 2021-03-29 2021-06-11 北京京东方技术开发有限公司 Attitude estimation method, attitude estimation device, attitude estimation equipment and storage medium
CN112949576B (en) * 2021-03-29 2024-04-23 北京京东方技术开发有限公司 Attitude estimation method, apparatus, device and storage medium
CN113011401B (en) * 2021-04-30 2023-03-21 汇纳科技股份有限公司 Face image posture estimation and correction method, system, medium and electronic equipment
CN113011401A (en) * 2021-04-30 2021-06-22 汇纳科技股份有限公司 Face image posture estimation and correction method, system, medium and electronic equipment

Also Published As

Publication number Publication date
CN109359526B (en) 2022-09-27

Similar Documents

Publication Publication Date Title
CN109359526A (en) A kind of face pose estimation, device and equipment
Li et al. Morphable displacement field based image matching for face recognition across pose
CN108876879B (en) Method and device for realizing human face animation, computer equipment and storage medium
CN105138954B (en) A kind of image automatic screening inquiry identifying system
CN106682598B (en) Multi-pose face feature point detection method based on cascade regression
Ramanathan et al. Face verification across age progression
CN105447441B (en) Face authentication method and device
CN107506693B (en) Distort face image correcting method, device, computer equipment and storage medium
US8249310B2 (en) Image processing apparatus and method and program
CN108229330A (en) Face fusion recognition methods and device, electronic equipment and storage medium
CN108182397B (en) Multi-pose multi-scale human face verification method
Geetha et al. A vision based dynamic gesture recognition of indian sign language on kinect based depth images
CN106951840A (en) A kind of facial feature points detection method
Xia et al. Head pose estimation in the wild assisted by facial landmarks based on convolutional neural networks
CN107767335A (en) A kind of image interfusion method and system based on face recognition features' point location
TW201137768A (en) Face recognition apparatus and methods
CN110096925A (en) Enhancement Method, acquisition methods and the device of Facial Expression Image
CN111860055B (en) Face silence living body detection method, device, readable storage medium and equipment
CN109325408A (en) A kind of gesture judging method and storage medium
CN109409298A (en) A kind of Eye-controlling focus method based on video processing
Song et al. Robust 3D face landmark localization based on local coordinate coding
Lv et al. Nasal similarity measure of 3D faces based on curve shape space
Du High-precision portrait classification based on mtcnn and its application on similarity judgement
CN113705466B (en) Face five sense organ shielding detection method for shielding scene, especially under high imitation shielding
CN107977618A (en) A kind of face alignment method based on Cascaded Double-layer neutral net

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant