CN108154550A - Real-time three-dimensional face reconstruction method based on RGBD cameras - Google Patents
- Publication number
- CN108154550A CN108154550A CN201711229856.6A CN201711229856A CN108154550A CN 108154550 A CN108154550 A CN 108154550A CN 201711229856 A CN201711229856 A CN 201711229856A CN 108154550 A CN108154550 A CN 108154550A
- Authority
- CN
- China
- Prior art keywords
- face
- point
- image
- dimensional
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality: G06T2207/10024—Color image; G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/30—Subject of image; Context of image processing: G06T2207/30196—Human being; Person; G06T2207/30201—Face
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention provides a three-dimensional face reconstruction method based on RGBD cameras, comprising the following steps. S1: acquire a color image and a depth image. S2: obtain the first point set. S3: solve the person's head pose, identity coefficients, expression coefficients, and the data offsets used to build the texture image. S4: compute the current face template and the three-dimensional face geometric model. S5: obtain the updated three-dimensional face geometric model. The beneficial effects of the present invention are: starting from a multilinear face model and face blend shapes, the method uses the captured color and depth images to further build texture-image data offsets and obtain the three-dimensional face geometric model. The method is computationally light and easy to run in real time, and it achieves finer geometric detail and texture modeling with less storage; it is a real-time three-dimensional face reconstruction method with good expressiveness.
Description
Technical field
The present invention relates to the technical field of computer vision, and more particularly to a real-time three-dimensional face reconstruction method based on RGBD cameras.
Background technology
With the development of RGBD cameras and computer vision, three-dimensional reconstruction presents broad application prospects. In particular, fast reconstruction of face geometry and reflectance based on RGBD cameras, applied on smart mobile devices, will have wide influence in fields such as consumer entertainment, animation production, and remote collaboration.
There are two main ways to realize three-dimensional face reconstruction based on RGBD cameras: first, methods based on a multilinear face model and blend shapes; second, general volumetric three-dimensional reconstruction methods. The former linearly combines several geometric face templates to obtain the final three-dimensional model; the number of templates determines the expressive power of the model, so the model is small and fast to process, but its expressive power is limited and the reconstruction lacks texture detail. The latter applies a general volumetric reconstruction method to face reconstruction; its model structure is huge, making real-time reconstruction difficult, and the reconstructed model lacks semantic information, a shortcoming for an object as semantically rich as the human face.
At present there is no real-time three-dimensional face reconstruction method with good expressiveness, which brings many inconveniences to three-dimensional face reconstruction.
Summary of the invention
To solve the problem that the prior art lacks a real-time three-dimensional face reconstruction method with good expressiveness, the present invention provides a real-time three-dimensional face reconstruction method based on RGBD cameras.
To solve the above problems, the technical solution adopted by the present invention is as follows:
The present invention provides a three-dimensional face reconstruction method based on RGBD cameras, comprising the following steps. S1: Acquire a color image and a depth image: use an RGBD camera to acquire one frame containing a color image and a depth image of a frontal face. S2: Obtain the first point set: extract the image coordinates of facial feature points in the color image, project those image coordinates into the depth image to obtain the three-dimensional coordinates of the feature points, and denote this set of three-dimensional coordinates the first point set. S3: Solve the person's head pose, identity coefficients, expression coefficients, and the data offsets used to build the texture image: use the first point set and the depth image to iteratively solve the head pose, and iteratively solve the identity and expression coefficients of the multilinear face model; build the texture-image data offsets, which comprise a depth-data offset and a color-data offset. S4: Compute the current face template and the three-dimensional face geometric model: compute the current face template from the multilinear face model and the face blend shapes; update the data offsets, and compute the three-dimensional face geometric model from the current face template and the updated data offsets. S5: Obtain the updated three-dimensional face geometric model: continue to acquire the next frame's color image and depth image, process them in turn with the methods of steps S2, S3, and S4, and obtain the updated three-dimensional face geometric model.
The method of solving the person's head pose, identity coefficients, and expression coefficients in step S3 comprises: S31: obtain the feature-point indices from the multilinear face model and thereby obtain the second point set corresponding to the first point set. S32: initialize the current head pose. S33: build the energy function Epos = ω1·E1 + ω2·E2, with E1 = Σk ‖R·lk + t − l′k‖² and E2 = Σk ‖R·fk + t − f′k‖², where L0 is the first point set; lk is a feature-point coordinate in the first point set and l′k the corresponding coordinate in the second point set; F0, the third point set, is the set of points in the depth image that have correspondences in the multilinear face model; fk is the three-dimensional coordinate of a point in the third point set and f′k the three-dimensional coordinate of the corresponding point in the multilinear face template; R, t are the person's head-pose parameters; and ω1 and ω2 are the coefficients of the two energy terms. S34: minimize the energy function Epos by iterative optimization and compute the head-pose parameters. S35: build the energy function again and compute the identity and expression coefficients with an iterative optimization algorithm.
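The energy of steps S31-S35 can be evaluated in a few lines. This is a minimal NumPy sketch of Epos = ω1·E1 + ω2·E2, assuming sum-of-squared-residual forms for the two terms (the patent presents the term formulas only as images, so their exact shape is reconstructed here); the function and parameter names are illustrative, not from the patent.

```python
import numpy as np

def pose_energy(R, t, first_pts, second_pts, third_pts, template_pts,
                w1=1.0, w2=1.0):
    # Evaluate E_pos = w1*E1 + w2*E2 for a candidate head pose (R, t).
    # E1: transformed first-point-set landmarks l_k vs. model landmarks l'_k.
    r1 = (first_pts @ R.T + t) - second_pts
    e1 = np.sum(r1 ** 2)
    # E2: transformed depth points f_k vs. template correspondences f'_k.
    r2 = (third_pts @ R.T + t) - template_pts
    e2 = np.sum(r2 ** 2)
    return w1 * e1 + w2 * e2
```

At the optimum both residual sets vanish, so the energy is zero for a perfectly aligned pose.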
The method of computing the current face template and the three-dimensional face geometric model in step S4 comprises: the multilinear face model, weighted-averaged by the identity coefficients, yields the blend shapes of the current face, i.e., a set of templates sharing one identity but with different expressions; the current face template is then computed with the expression coefficients. The calculation formula of the current face template is B = Σi wexp(i)·Bi, where B is the current face template, Bi is a face blend-shape model, and wexp(i) is the i-th component of the expression-coefficient vector wexp. The depth-data offset is computed per texel (u, v) from the following quantities: Bump(u, v) is the offset of the depth data and Color(u, v) the offset of the color data, both with initial value 0; Weight(u, v) is the confidence of the feature point's depth data and/or color data; V(u, v) is the three-dimensional coordinate of the point in the third point set; N(u, v) is the normal vector at V(u, v); V′(u, v) is the three-dimensional coordinate of the point obtained by projecting along N(u, v) onto the depth image; p = V′(u, v) − V(u, v); and C′ is the color of V′(u, v) in the color image. The coordinate of a point of the three-dimensional face geometric model is computed as M(u, v) = V(u, v) + Bump(u, v)·N(u, v).
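The two formulas of step S4 — the blend-shape combination B = Σi wexp(i)·Bi and the displaced surface M(u, v) = V(u, v) + Bump(u, v)·N(u, v) — can be sketched in NumPy. This assumes blend shapes stored as an N×V×3 array and unit normals; the names are illustrative.

```python
import numpy as np

def current_face_template(blendshapes, w_exp):
    # B = sum_i w_exp(i) * B_i over the N blend shapes (shape: N x V x 3).
    return np.tensordot(w_exp, blendshapes, axes=1)

def geometry_model(V, bump, N):
    # M(u, v) = V(u, v) + Bump(u, v) * N(u, v): displace each surface
    # point along its unit normal by the per-texel depth offset.
    return V + bump[..., None] * N
```

With a zero bump map, `geometry_model` returns the template surface unchanged, matching the offsets' initial value of 0.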
The beneficial effects of the present invention are: starting from the multilinear face model and the face blend shapes, the method uses the captured color and depth images to further build texture-image data offsets and obtain the three-dimensional face geometric model. The method is computationally light and easy to run in real time, achieves finer geometric detail and texture modeling with less storage, and is a real-time three-dimensional face reconstruction method with good expressiveness.
Description of the drawings
Fig. 1 is a schematic diagram of the RGBD-camera-based three-dimensional face reconstruction method in an embodiment of the present invention.
Fig. 2 is a schematic diagram of the method of solving the person's head pose, identity coefficients, and expression coefficients in an embodiment of the present invention.
Specific embodiment
The present invention is described in detail below through specific embodiments in conjunction with the accompanying drawings, for a better understanding of the invention; the following embodiments, however, do not limit its scope. In addition, it should be noted that the illustrations provided with the following embodiments only sketch the basic conception of the invention: the drawings show only the components related to the invention rather than the actual component counts, shapes, and sizes of an implementation, and in actual implementation the shape, quantity, and proportion of each component may vary arbitrarily and the component layout may be more complex.
Embodiment 1
The development of three-dimensional face reconstruction
Real-time three-dimensional face reconstruction is a very important research problem with broad application scenarios such as intelligent surveillance, robotics, and human-computer interaction. With the development of computer vision, image processing, and machine learning, more and more face reconstruction methods have been proposed. An ordinary camera provides rich color and texture information, which is very useful for three-dimensional face reconstruction, but the data it captures is highly susceptible to illumination changes, and in some complex scenes the background texture may be highly similar to the foreground. Compared with traditional two-dimensional grayscale or color images, a depth image provides the distance from the photographed object to the camera. Using the depth value at each pixel and the camera's intrinsic parameters, the object's shape and size can easily be measured; moreover, because depth data is less affected by surface texture, problems such as target segmentation and background clipping become much easier.
Depth cameras currently take three main forms: depth cameras based on binocular vision, depth cameras based on structured light, and depth cameras based on TOF (time of flight). Each is briefly described below; any of these forms may be used in embodiments of the invention.
A depth camera based on binocular vision uses binocular stereo: two cameras at different viewpoints photograph the same scene, and for the same object the pixel disparity between the two captured images is directly related to the object's depth, so depth information is obtained by computing pixel disparities with image-processing techniques.
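The disparity-to-depth relation described above is Z = f·b/d for focal length f (in pixels), baseline b, and disparity d. A minimal NumPy sketch with illustrative values:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Binocular depth: Z = f * b / d.  A larger pixel disparity d between
    # the two views means the object is closer to the camera pair; zero
    # disparity (point at infinity) maps to infinite depth.
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0,
                    focal_px * baseline_m / np.where(d > 0, d, 1.0),
                    np.inf)
```

For example, with a 500 px focal length and a 0.1 m baseline, a 50 px disparity corresponds to a depth of 1 m.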
A depth camera based on structured light projects a coded structured-light pattern into the scene, captures an image of the scene containing the pattern, and then processes that image, for example by matching it against the reference structured-light image, to obtain depth information directly.
A depth camera based on TOF emits laser pulses into the scene; after a pulse reflects off the target it is received by the receiving unit, the round-trip time is recorded, and the target's depth is computed from that time. The first of these three approaches generally uses color cameras, so it is strongly affected by illumination and computing depth is relatively expensive; the latter two generally use infrared light, are not affected by illumination, and require relatively little computation. Currently, structured-light cameras using VCSEL chips as the light source and TOF cameras can be embedded in devices such as mobile phones and computers.
Microsoft released the Kinect imaging device, which consists of a depth sensor (based on the structured-light principle) and an RGB camera. Its capture rate reaches 30 frames per second at a resolution of 640*480; the effective range of the depth data is roughly 0.5-10 meters, with higher depth accuracy between 0.8-3.5 meters. Other RGBD cameras have since appeared, such as the Kinect Xbox 360, Kinect One, Xtion, and Orbbec.
For three-dimensional face reconstruction, RGBD-camera-based techniques have the following advantages: RGBD cameras obtain depth information quickly; they are active sensors and are not easily disturbed by ambient visible light; and they operate much like ordinary cameras and are easy to use. At present, the main challenges in applying RGBD cameras to object modeling are: 1) how to rapidly reconstruct a complete three-dimensional face model with an RGBD camera; 2) the reconstruction system must register the three-dimensional point clouds acquired by the RGBD camera from each viewpoint, and when registration fails or its error is large, how to ensure the system still runs correctly and obtains a complete model; 3) how to reconstruct a relatively complete model for a partially occluded face; 4) how to systematically assess the accuracy of RGBD face reconstruction. There are two main ways to realize three-dimensional face reconstruction based on RGBD cameras: first, methods based on a multilinear face model and blend shapes; second, general volumetric reconstruction methods. The former linearly combines several geometric face templates to obtain the final model; the number of templates determines its expressive power, so the model is small and fast to process, but its expressive power is limited and the result lacks texture detail. The latter applies general volumetric reconstruction to faces; its model structure is huge, making real-time reconstruction difficult, and the reconstructed model lacks semantic information, a shortcoming for an object as semantically rich as the face.
The method of three-dimensional face reconstruction
To solve the problem that the prior art lacks a real-time three-dimensional face reconstruction method with good expressiveness, the present invention provides a real-time three-dimensional face reconstruction method based on RGBD cameras.
As shown in Figure 1,
A three-dimensional face reconstruction method based on RGBD cameras comprises the following steps:
(1) Acquire a color image and a depth image: use a Kinect to acquire one frame containing a color image C0 and a depth image D0 of a neutral frontal face.
(2) Obtain the first point set: extract facial feature points (landmarks) in the color image C0, project the feature points' image coordinates into the depth image D0 to obtain their three-dimensional coordinates, and denote the resulting three-dimensional point set the first point set L0.
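Step (2)'s lifting of a landmark's image coordinate into the depth image can be sketched with the standard pinhole camera model, assuming known intrinsics fx, fy, cx, cy (the patent does not spell out the projection; this is the conventional formulation, and the names are illustrative):

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    # Lift an image coordinate (u, v) with measured depth z = D0(u, v)
    # to a 3D camera-space point via the pinhole model:
    #   X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z.
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
```

Applying this to every detected landmark yields the first point set L0.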
(3) Solve the person's head pose, identity coefficients, expression coefficients, and the data offsets used to build the texture image: use the first point set L0 and the depth image D0 to iteratively solve the head pose [R, t], and iteratively solve the identity coefficients wid and expression coefficients wexp of the multilinear face model; build the texture-image data offsets, comprising the depth-data offset Bump(u, v) and the color-data offset Color(u, v).
(4) Compute the current face template and the three-dimensional face geometric model: compute the current face template B from the multilinear face model and the face blend shapes; update the data offsets, where updating comprises updating the depth-data offset and/or the color-data offset; compute the three-dimensional face geometric model M from the current face template B and the updated data offsets.
(5) Obtain the updated three-dimensional face geometric model: for each newly acquired pair of images Ct and Dt (where t >= 1), extract the three-dimensional feature point set Lt as in step (2); compute the current head pose [R, t] by iterative optimization and estimate the combined expression coefficients wexp as in step (3); update Bump(u, v) and Color(u, v) as in step (4); and update the final three-dimensional face geometric model M with Bump(u, v) and Color(u, v).
As shown in Fig. 2, the method of solving the person's head pose, identity coefficients, and expression coefficients comprises the following steps:
(1) Obtain the feature-point indices from the multilinear face model and thereby obtain the second point set LT corresponding to the first point set L0.
(2) Initialize the current head pose [R, t] = [I, 0], where I is the identity matrix and 0 the zero vector.
(3) Build the energy function Epos = ω1·E1 + ω2·E2, with E1 = Σk ‖R·lk + t − l′k‖² and E2 = Σk ‖R·fk + t − f′k‖², where L0 is the first point set; lk is a feature-point coordinate in L0 and l′k the corresponding coordinate in the second point set LT; F0, the third point set, is the set of points in the depth image that have correspondences in the multilinear face model; fk is the three-dimensional coordinate of a point of F0 and f′k the three-dimensional coordinate of the corresponding point in the multilinear face template; R, t are the head-pose parameters; ω1 and ω2 are the coefficients of the two energy terms.
(4) Minimize the energy function Epos by iterative optimization and compute the head-pose parameters [R, t].
(5) Build the energy function again and compute the identity coefficients wid and expression coefficients wexp with an iterative optimization algorithm.
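Step (4) minimizes the pose energy iteratively. As a hedged stand-in, the landmark term alone, with known correspondences, admits the closed-form Kabsch/SVD solution sketched below; this is the standard solver for an E1-type term, not necessarily the patent's own optimization procedure.

```python
import numpy as np

def rigid_align(src, dst):
    # Closed-form minimizer of sum_k ||R*src_k + t - dst_k||^2 (an E1-type
    # term) via the Kabsch/SVD construction, assuming known correspondences.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                         # proper rotation (det = +1)
    t = mu_d - R @ mu_s
    return R, t
```

In an iterative scheme such a solve alternates with re-estimating correspondences (ICP-style) until the energy stops decreasing.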
The method of computing the current face template and the three-dimensional face geometric model in step (4) comprises: the multilinear face model, weighted-averaged by the identity coefficients wid, yields the blend shapes {Bi} (1 <= i <= N) of the current face, a set of templates sharing one identity but with different expressions; the current face template B is then computed with the expression coefficients wexp. The calculation formula of the current face template is B = Σi wexp(i)·Bi, where B is the current face template, Bi is a face blend shape, and wexp(i) is the i-th component of the expression-coefficient vector wexp. The depth-data offset is computed per texel from the following quantities: Bump(u, v) is the offset of the depth data and Color(u, v) the offset of the color data, both with initial value 0; Weight(u, v) is the confidence of the feature point's depth data and/or color data; V(u, v) is the three-dimensional coordinate of the point in the third point set; N(u, v) is the normal vector at V(u, v); V′(u, v) is the three-dimensional coordinate of the point obtained by projecting along N(u, v) onto the depth image; p = V′(u, v) − V(u, v); and C′ is the color of V′(u, v) in the color image. V(u, v) is the coordinate of a mesh vertex of the multilinear face model, or is obtained by interpolating mesh-vertex coordinates.
The coordinate of a point of the three-dimensional face geometric model is computed as M(u, v) = V(u, v) + Bump(u, v)·N(u, v).
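The per-texel offset update can be sketched as a confidence-weighted blend. This is a reconstruction from the symbol definitions (the patent's exact update formula is not reproduced in the text): the depth offset is assumed to accumulate p's component along the normal, and the color offset to accumulate the sampled color C′, each blended by Weight(u, v).

```python
import numpy as np

def update_offsets(bump, color, weight, V, N, V_proj, C_proj):
    # Per-texel update of the texture-image offsets.  p = V'(u,v) - V(u,v);
    # the depth offset stores p's component along the normal N(u,v), and the
    # color offset stores the sampled color C', both blended into the running
    # values by the confidence Weight(u,v).
    p = V_proj - V
    bump_obs = np.sum(p * N, axis=-1)          # signed distance along normal
    new_bump = (1.0 - weight) * bump + weight * bump_obs
    w3 = weight[..., None]
    new_color = (1.0 - w3) * color + w3 * C_proj
    return new_bump, new_color
```

With full confidence (Weight = 1) the offsets simply take the newly observed values; with Weight = 0 they keep their initial value 0.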
In alternative embodiments of the present invention, facial feature point extraction and tracking include, but are not limited to, feature-point extraction and tracking with cascade feature extractors, end-to-end neural networks, and the like; the number of extracted feature points includes, but is not limited to, common counts such as 68 points or 74 points.
In alternative embodiments of the present invention, RGBD cameras include, but are not limited to, sensors that can simultaneously acquire a color image and a depth image, such as the Kinect Xbox 360, Kinect One, Xtion, and Orbbec.
The method of the present invention, starting from the multilinear face model and the face blend shapes, uses the captured color and depth images to further build texture-image data offsets and obtain the three-dimensional face geometric model. The method is computationally light and easy to run in real time, achieves finer geometric detail and texture modeling with less storage, and is a real-time three-dimensional face reconstruction method with good expressiveness. The method is not limited to the equipment described in the present invention: any other device that can simultaneously acquire color and depth images and realize the method of the present invention is also acceptable. The real-time, expressive three-dimensional face geometric model obtained by the present invention can have many applications, and further applications of the models obtained by the method of the present invention should also belong within the scope the invention intends to protect.
The above content is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the present invention cannot be regarded as confined to these descriptions. For those skilled in the art to which the present invention belongs, several equivalent substitutions or obvious modifications of identical performance or use can also be made without departing from the inventive concept, and all of them should be regarded as belonging to the protection scope of the present invention.
Claims (10)
1. A three-dimensional face reconstruction method based on RGBD cameras, characterized in that it comprises the following steps:
S1: acquire a color image and a depth image: use an RGBD camera to acquire one frame containing a color image and a depth image of a frontal face;
S2: obtain the first point set: extract the image coordinates of facial feature points in the color image, project those image coordinates into the depth image to obtain the three-dimensional coordinates of the feature points, and denote this set of three-dimensional coordinates the first point set;
S3: solve the person's head pose, identity coefficients, expression coefficients, and the data offsets used to build the texture image: use the first point set and the depth image to iteratively solve the head pose, and iteratively solve the identity and expression coefficients of the multilinear face model; build the texture-image data offsets, which comprise a depth-data offset and a color-data offset;
S4: compute the current face template and the three-dimensional face geometric model: compute the current face template from the multilinear face model and the face blend shapes; update the data offsets, and compute the three-dimensional face geometric model from the current face template and the updated data offsets;
S5: obtain the updated three-dimensional face geometric model: continue to acquire the next color image frame and depth image, process them in turn with the methods of steps S2, S3, and S4, and obtain the updated three-dimensional face geometric model.
2. The three-dimensional face reconstruction method based on RGBD cameras according to claim 1, characterized in that the RGBD cameras in step S1 include the Kinect Xbox 360, Kinect One, Xtion, and Orbbec, which can simultaneously acquire a color image and a depth image.
3. The three-dimensional face reconstruction method based on RGBD cameras according to claim 1, characterized in that the method of extracting facial feature points in step S2 includes feature-point extraction with cascade feature extractors and end-to-end neural networks, and the number of extracted feature points is 68 points or 74 points.
4. The three-dimensional face reconstruction method based on RGBD cameras according to claim 1, characterized in that step S3 comprises the following steps:
S31: obtain the feature-point indices from the multilinear face model and thereby obtain the second point set corresponding to the first point set;
S32: initialize the current head pose;
S33: build the energy function Epos = ω1·E1 + ω2·E2, with E1 = Σk ‖R·lk + t − l′k‖² and E2 = Σk ‖R·fk + t − f′k‖², where L0 is the first point set; lk is a feature-point coordinate in the first point set and l′k the corresponding coordinate in the second point set; F0, the third point set, is the set of points in the depth image that have correspondences in the multilinear face model; fk is the three-dimensional coordinate of a point in the third point set and f′k the three-dimensional coordinate of the corresponding point in the multilinear face template; R, t are the person's head-pose parameters; and ω1 and ω2 are the coefficients of the two energy terms;
S34: minimize the energy function Epos by iterative optimization and compute the head-pose parameters;
S35: build the energy function again and compute the identity and expression coefficients with an iterative optimization algorithm.
5. The three-dimensional face reconstruction method based on RGBD cameras according to claim 1, characterized in that obtaining the current face template in step S4 comprises the following steps: weighted-average the multilinear face model by the identity coefficients to obtain the blend shapes of the current face, the blend shapes being templates sharing one identity but with different expressions; then compute the current face template with the expression coefficients.
6. The three-dimensional face reconstruction method based on RGBD cameras according to claim 1, characterized in that the calculation formula of the current face template in step S4 is B = Σi wexp(i)·Bi, where B is the current face template, Bi is a face blend shape, and wexp(i) is the i-th component of the expression-coefficient vector wexp.
7. The three-dimensional face reconstruction method based on RGBD cameras according to claim 1, characterized in that updating the data offsets in step S4 includes updating the depth-data offset and/or updating the color-data offset.
8. The three-dimensional face reconstruction method based on RGBD cameras according to claim 4, characterized in that the depth-data offset is computed from the following quantities: Bump(u, v) is the offset of the depth data and Color(u, v) the offset of the color data, both with initial value 0; Weight(u, v) is the confidence of the feature point's depth data and/or color data; V(u, v) is the three-dimensional coordinate of the point in the third point set; N(u, v) is the normal vector at V(u, v); V′(u, v) is the three-dimensional coordinate of the point obtained by projecting along N(u, v) onto the depth image; p = V′(u, v) − V(u, v); and C′ is the color of V′(u, v) in the color image.
9. The three-dimensional face reconstruction method based on RGBD cameras according to claim 8, characterized in that V(u, v) is the coordinate of a mesh vertex of the multilinear face model or is obtained by interpolating mesh-vertex coordinates.
10. The three-dimensional face reconstruction method based on RGBD cameras according to claim 8, characterized in that the coordinate of a point of the three-dimensional face geometric model is computed as M(u, v) = V(u, v) + Bump(u, v)·N(u, v).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711229856.6A CN108154550B (en) | 2017-11-29 | 2017-11-29 | RGBD camera-based real-time three-dimensional face reconstruction method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108154550A true CN108154550A (en) | 2018-06-12 |
CN108154550B CN108154550B (en) | 2021-07-06 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070236501A1 (en) * | 2006-04-05 | 2007-10-11 | Ig-Jae Kim | Method for generating intuitive quasi-eigen faces |
CN101593365A (en) * | 2009-06-19 | 2009-12-02 | 电子科技大学 | Adjustment method for a universal three-dimensional face model |
US20100135541A1 (en) * | 2008-12-02 | 2010-06-03 | Shang-Hong Lai | Face recognition method |
US20120183238A1 (en) * | 2010-07-19 | 2012-07-19 | Carnegie Mellon University | Rapid 3D Face Reconstruction From a 2D Image and Methods Using Such Rapid 3D Face Reconstruction |
CN105719326A (en) * | 2016-01-19 | 2016-06-29 | 华中师范大学 | Realistic face generation method based on a single photo |
CN106023288A (en) * | 2016-05-18 | 2016-10-12 | 浙江大学 | Image-based dynamic avatar construction method |
CN106327571A (en) * | 2016-08-23 | 2017-01-11 | 北京的卢深视科技有限公司 | Three-dimensional face modeling method and device |
CN107358648A (en) * | 2017-07-17 | 2017-11-17 | 中国科学技术大学 | Real-time fully-automatic high-quality three-dimensional face reconstruction method based on a single face image |
2017-11-29: Application CN201711229856.6A filed; granted as patent CN108154550B (status: Active)
Non-Patent Citations (2)
Title |
---|
Chen Cao et al.: "FaceWarehouse: A 3D Facial Expression Database for Visual Computing", IEEE Transactions on Visualization and Computer Graphics * |
Wang Han et al.: "Automatic reconstruction of face shape with geometric details from a single image", Journal of Computer-Aided Design & Computer Graphics * |
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108921926A (en) * | 2018-07-02 | 2018-11-30 | 广州云从信息科技有限公司 | End-to-end three-dimensional face reconstruction method based on a single image |
CN108921926B (en) * | 2018-07-02 | 2020-10-09 | 云从科技集团股份有限公司 | End-to-end three-dimensional face reconstruction method based on single image |
WO2020034698A1 (en) * | 2018-08-16 | 2020-02-20 | Oppo广东移动通信有限公司 | Three-dimensional model-based special effect processing method and device, and electronic apparatus |
CN109147037A (en) * | 2018-08-16 | 2019-01-04 | Oppo广东移动通信有限公司 | Special effect processing method and device based on a three-dimensional model, and electronic device |
US11978157B2 (en) | 2018-09-30 | 2024-05-07 | Shining 3D Tech Co., Ltd. | Method and apparatus for generating three-dimensional model, device, and storage medium |
WO2020063986A1 (en) * | 2018-09-30 | 2020-04-02 | 先临三维科技股份有限公司 | Method and apparatus for generating three-dimensional model, device, and storage medium |
CN109615688B (en) * | 2018-10-23 | 2023-06-23 | 杭州小影创新科技股份有限公司 | Real-time face three-dimensional reconstruction system and method on mobile equipment |
CN109615688A (en) * | 2018-10-23 | 2019-04-12 | 杭州趣维科技有限公司 | Real-time face three-dimensional reconstruction system and method on a mobile device |
CN111369651A (en) * | 2018-12-25 | 2020-07-03 | 浙江舜宇智能光学技术有限公司 | Three-dimensional expression animation generation method and system |
CN109887076A (en) * | 2019-02-25 | 2019-06-14 | 清华大学 | Method and device for establishing a three-dimensional face model according to viewing-angle changes |
CN109887076B (en) * | 2019-02-25 | 2021-02-12 | 清华大学 | Method and device for establishing three-dimensional model of human face according to visual angle change |
CN110335343A (en) * | 2019-06-13 | 2019-10-15 | 清华大学 | Human body three-dimensional reconstruction method and device based on an RGBD single-view image |
CN110335343B (en) * | 2019-06-13 | 2021-04-06 | 清华大学 | Human body three-dimensional reconstruction method and device based on RGBD single-view-angle image |
CN110363858B (en) * | 2019-06-18 | 2022-07-01 | 新拓三维技术(深圳)有限公司 | Three-dimensional face reconstruction method and system |
CN110363858A (en) * | 2019-06-18 | 2019-10-22 | 新拓三维技术(深圳)有限公司 | Three-dimensional face reconstruction method and system |
CN110400369A (en) * | 2019-06-21 | 2019-11-01 | 苏州狗尾草智能科技有限公司 | Face reconstruction method, system platform and storage medium |
CN110533773A (en) * | 2019-09-02 | 2019-12-03 | 北京华捷艾米科技有限公司 | Three-dimensional face reconstruction method, device and related equipment |
CN110689625A (en) * | 2019-09-06 | 2020-01-14 | 清华大学 | Automatic generation method and device for customized face mixed expression model |
CN110689625B (en) * | 2019-09-06 | 2021-07-16 | 清华大学 | Automatic generation method and device for customized face mixed expression model |
WO2021093453A1 (en) * | 2019-11-15 | 2021-05-20 | 腾讯科技(深圳)有限公司 | Method for generating 3d expression base, voice interactive method, apparatus and medium |
US11748934B2 (en) | 2019-11-15 | 2023-09-05 | Tencent Technology (Shenzhen) Company Limited | Three-dimensional expression base generation method and apparatus, speech interaction method and apparatus, and medium |
CN111105881A (en) * | 2019-12-26 | 2020-05-05 | 昆山杜克大学 | Database system for 3D measurement of human phenotype |
CN111105881B (en) * | 2019-12-26 | 2022-02-01 | 昆山杜克大学 | Database system for 3D measurement of human phenotype |
CN111462204A (en) * | 2020-02-13 | 2020-07-28 | 腾讯科技(深圳)有限公司 | Virtual model generation method, virtual model generation device, storage medium, and electronic device |
CN111462204B (en) * | 2020-02-13 | 2023-03-03 | 腾讯科技(深圳)有限公司 | Virtual model generation method, virtual model generation device, storage medium, and electronic device |
CN111968235A (en) * | 2020-07-08 | 2020-11-20 | 杭州易现先进科技有限公司 | Object attitude estimation method, device and system and computer equipment |
CN111968235B (en) * | 2020-07-08 | 2024-04-12 | 杭州易现先进科技有限公司 | Object attitude estimation method, device and system and computer equipment |
CN112597901B (en) * | 2020-12-23 | 2023-12-29 | 艾体威尔电子技术(北京)有限公司 | Device and method for effectively recognizing human face in multiple human face scenes based on three-dimensional ranging |
CN112597901A (en) * | 2020-12-23 | 2021-04-02 | 艾体威尔电子技术(北京)有限公司 | Device and method for effective face recognition in multi-face scenes based on three-dimensional ranging |
CN113421292A (en) * | 2021-06-25 | 2021-09-21 | 北京华捷艾米科技有限公司 | Three-dimensional modeling detail enhancement method and device |
CN113763559A (en) * | 2021-07-01 | 2021-12-07 | 清华大学 | Geometric motion detail reconstruction method and device for fitting depth image |
CN113763559B (en) * | 2021-07-01 | 2024-04-09 | 清华大学 | Geometric motion detail reconstruction method for fitting depth image |
CN114049464A (en) * | 2021-11-15 | 2022-02-15 | 聚好看科技股份有限公司 | Reconstruction method and device of three-dimensional model |
CN114049464B (en) * | 2021-11-15 | 2024-09-27 | 聚好看科技股份有限公司 | Reconstruction method and device of three-dimensional model |
CN114445514A (en) * | 2022-01-26 | 2022-05-06 | 四川大学 | Template data generation and application method based on magnetic resonance scanning image |
CN114445514B (en) * | 2022-01-26 | 2023-04-07 | 四川大学 | Template data generation and application method based on magnetic resonance scanning image |
CN114863506A (en) * | 2022-03-18 | 2022-08-05 | 珠海优特电力科技股份有限公司 | Method, device and system for verifying access permission and identity authentication terminal |
CN116664746A (en) * | 2023-05-29 | 2023-08-29 | 华院计算技术(上海)股份有限公司 | Face reconstruction method and device, computer readable storage medium and terminal |
CN116664746B (en) * | 2023-05-29 | 2024-04-02 | 华院计算技术(上海)股份有限公司 | Face reconstruction method and device, computer readable storage medium and terminal |
CN116704587A (en) * | 2023-08-02 | 2023-09-05 | 山东建筑大学 | Multi-person head pose estimation method and system integrating texture information and depth information |
CN116704587B (en) * | 2023-08-02 | 2023-10-20 | 山东建筑大学 | Multi-person head pose estimation method and system integrating texture information and depth information |
Also Published As
Publication number | Publication date |
---|---|
CN108154550B (en) | 2021-07-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108154550A (en) | Real-time three-dimensional face reconstruction method based on an RGBD camera |
WO2019219013A1 (en) | Three-dimensional reconstruction method and system for joint optimization of human body posture model and appearance model |
US10846903B2 (en) | Single shot capture to animated VR avatar |
CN104715493B (en) | Pose estimation method for moving human bodies |
CN104680582B (en) | Object-oriented customized three-dimensional human body model creation method |
Alexiadis et al. | An integrated platform for live 3D human reconstruction and motion capturing |
CN106327571B (en) | Three-dimensional face modeling method and device |
CN105631861B (en) | Method for recovering 3D human body pose from unmarked monocular images in combination with a height map |
CN104992441B (en) | Realistic human body three-dimensional modeling method for personalized virtual fitting |
CN109360240A (en) | Small drone localization method based on binocular vision |
CN107186708A (en) | Hand-eye servo robot grasping system and method based on deep-learning image segmentation |
CN110148217A (en) | Real-time three-dimensional reconstruction method, device and equipment |
WO2019219014A1 (en) | Three-dimensional geometry and eigencomponent reconstruction method and device based on light and shadow optimization |
CN110807364A (en) | Modeling and capturing method and system for three-dimensional face and eyeball motion |
Ye et al. | Free-viewpoint video of human actors using multiple handheld Kinects |
CN113077519B (en) | Multi-camera extrinsic parameter automatic calibration method based on human skeleton extraction |
CN104376599A (en) | Convenient three-dimensional head model generation system |
CN104915978A (en) | Realistic animation generation method based on Kinect |
CN108629828B (en) | Scene rendering transition method during movement through a large three-dimensional scene |
CN111489392A (en) | Method and system for capturing single-target human motion poses in a multi-person environment |
Cheung et al. | Markerless human motion transfer |
CN112183316A (en) | Method for measuring the body posture of athletes |
Darujati et al. | Facial motion capture with 3D active appearance models |
CN108010122A (en) | Method and system for three-dimensional human body model reconstruction and measurement |
CN108961393A (en) | Human body modeling method and device based on point cloud data streams |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
CB02 | Change of applicant information ||
| Address after: 11-13/F, Joint Headquarters Building, High-tech Zone, 63 Xuefu Road, Yuehai Street, Nanshan District, Shenzhen, Guangdong 518000. Applicant after: Obi Zhongguang Technology Group Co., Ltd. Address before: A808, Zhongdi Building, Industry-University-Research Base, China University of Geosciences, No. 8 Yuexing Third Road, Nanshan District, Shenzhen, Guangdong 518000. Applicant before: SHENZHEN ORBBEC Co., Ltd. |
GR01 | Patent grant ||