CN112562083A - Depth camera-based static portrait three-dimensional reconstruction and dynamic face fusion method - Google Patents
Depth camera-based static portrait three-dimensional reconstruction and dynamic face fusion method
- Publication number
- CN112562083A CN112562083A CN202011437377.5A CN202011437377A CN112562083A CN 112562083 A CN112562083 A CN 112562083A CN 202011437377 A CN202011437377 A CN 202011437377A CN 112562083 A CN112562083 A CN 112562083A
- Authority
- CN
- China
- Prior art keywords
- portrait
- dimensional
- dynamic
- static
- dimensional grid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G06T5/70—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20028—Bilateral filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The invention discloses a depth camera-based method for three-dimensional reconstruction of a static portrait and fusion with a dynamic face, relating to the technical field of image reconstruction. The method acquires color and depth frame sequences of a static portrait within a region set in front of the depth camera; computes, with an iterative closest point algorithm, the minimum distance difference between adjacent point clouds in the point cloud sequence; segments and mattes each color frame to obtain a plurality of portrait masks; projects the color information of each color frame into the corresponding spatial voxels according to the relative poses, averages it, and assigns the average to each triangular facet of the static portrait three-dimensional mesh, yielding a static portrait mesh with color information; and constructs a dynamic portrait three-dimensional mesh in real time from the depth information of the depth camera inside the dynamic portrait mask. The method thereby achieves a complete portrait three-dimensional model, real-time driving of the complete model, and dynamic real-time three-dimensional reconstruction of the portrait.
Description
Technical Field
The invention relates to the technical field of image reconstruction, in particular to a method for three-dimensional reconstruction of a static portrait and fusion of a dynamic face based on a depth camera.
Background
Current desktop-level real-time three-dimensional portrait reconstruction schemes fall into two classes. One class uses human body template algorithms, fitting a three-dimensional portrait with parameterized models such as 3DMM and SMPL. The other meshes and textures the depth information directly, but lacks information about the back of the portrait, leaving the three-dimensional model incomplete.
Disclosure of Invention
To overcome the defects of the prior art, an embodiment of the invention provides a depth camera-based method for three-dimensional reconstruction of a static portrait and fusion with a dynamic face, comprising the following steps:
(1) generating a portrait point cloud:
(11) acquiring a color sequence frame and a depth sequence frame of a static portrait in a region range set by a depth camera;
(12) respectively carrying out bilateral filtering denoising treatment on each depth frame in the depth sequence frames, and projecting the depth information of each denoised depth frame into a three-dimensional space to form a corresponding point cloud sequence;
(13) calculating the minimum value of the distance difference between two adjacent groups of point clouds in the point cloud sequence by adopting an iterative closest point algorithm;
(14) calculating a relative pose between depth cameras that capture the two sets of point clouds according to the minimum value;
(15) transferring each point cloud into a unified three-dimensional coordinate system according to the relative poses to generate a portrait point cloud;
(2) completing the static portrait:
(21) respectively carrying out segmentation matting on each color frame in the color sequence frames to obtain a plurality of portrait masks;
(22) projecting the multiple portrait masks into three-dimensional space according to the relative poses to generate a complete three-dimensional portrait outline;
(23) fusing the three-dimensional portrait outline and the portrait point cloud in three-dimensional space to generate a static portrait three-dimensional grid;
(3) static portrait color fusion:
(31) projecting the color information of each color frame into the corresponding spatial voxels according to the relative pose of each color frame;
(32) calculating the average value of the color information and assigning it to each triangular facet of the static portrait three-dimensional grid to obtain the static portrait three-dimensional grid with color information;
(4) constructing a dynamic portrait three-dimensional grid:
(41) obtaining the coordinate information of the face key points of each frame of the static portrait three-dimensional grid by using a face key point detection algorithm;
(42) taking the convex hull formed by the face key points as a portrait mask, and expanding the range of the portrait mask outward by a set proportion to obtain a dynamic portrait mask;
(43) and constructing a dynamic portrait three-dimensional grid in real time according to the depth information of the depth camera in the dynamic portrait mask.
Preferably, fusing the three-dimensional portrait outline and the portrait point cloud in a three-dimensional space to generate a static portrait three-dimensional mesh comprises:
dividing the three-dimensional coordinate system into a plurality of spatial voxels, and judging, for each voxel, whether it is occupied by any point of the portrait point cloud or of the three-dimensional portrait outline;
and if so, extracting triangular patches as the isosurface inside each occupied voxel according to a three-dimensional isosurface extraction algorithm.
Optionally, after the dynamic portrait three-dimensional mesh is constructed in real time according to the depth information of the depth camera in the dynamic portrait mask, the method further includes:
matching the dynamic portrait mask of the current user with the static portrait three-dimensional grid according to the coordinate information of the face key points, so that each stitching point at the edge of the dynamic portrait mask is placed in one-to-one correspondence with its closest point in the static portrait three-dimensional grid;
and establishing a rotation-displacement matrix from this one-to-one correspondence, and calculating, from the real-time coordinate position of the dynamic portrait mask, the coordinate positions of its closest points on the static portrait three-dimensional grid, so as to drive the static portrait three-dimensional grid and keep the correct pose relationship between the static portrait three-dimensional grid and the dynamic face.
Optionally, after the dynamic portrait three-dimensional mesh is constructed in real time according to the depth information of the depth camera in the dynamic portrait mask, the method further includes:
deleting the portion of the static portrait three-dimensional grid inside the dynamic portrait mask, and fusing the current static portrait three-dimensional grid with the dynamic portrait three-dimensional grid in each spatial voxel to obtain a fused portrait three-dimensional grid;
obtaining a first portrait three-dimensional grid according to a three-dimensional isosurface extraction algorithm;
and smoothing the portrait three-dimensional grid to obtain a second portrait three-dimensional grid.
Optionally, after the dynamic portrait three-dimensional mesh is constructed in real time according to the depth information of the depth camera in the dynamic portrait mask, the method further includes:
and projecting the color information in the dynamic portrait mask to the corresponding spatial voxels and assigning it to each triangular facet of the second portrait three-dimensional grid to obtain a third portrait three-dimensional grid.
The method for three-dimensional reconstruction of the static portrait and fusion of the dynamic face based on the depth camera has the following beneficial effects:
(1) by fusing the portrait point cloud with the three-dimensional portrait outline, parts that the depth camera cannot capture, or captures poorly, are completed, making the portrait three-dimensional model complete;
(2) by fusing and stitching the static and dynamic three-dimensional models, real-time driving of the complete portrait three-dimensional model and dynamic real-time three-dimensional reconstruction of the portrait are achieved.
Detailed Description
The present invention will be described in detail with reference to the following embodiments.
The embodiment of the invention provides a method for three-dimensional reconstruction of a static portrait and fusion of a dynamic face based on a depth camera, which comprises the following steps:
s101, generating a portrait point cloud, which specifically comprises the following steps:
and S1011, acquiring a color sequence frame and a depth sequence frame of the static portrait in the area range set by the depth camera.
As a specific embodiment of the invention, spatial voxels are first partitioned within a fixed region in front of the depth camera; data outside these voxels are discarded. The user rotates in place through one full turn at constant speed in front of the depth camera, keeping the upper body steady and the facial expression as unchanged as possible, while the color frame sequence and depth frame sequence captured by the camera are stored.
And S1012, applying bilateral filtering to denoise each depth frame in the depth frame sequence, and projecting the depth information of each denoised depth frame into three-dimensional space to form a corresponding point cloud sequence.
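The back-projection stage of step S1012 can be illustrated with a small NumPy sketch. The pinhole intrinsics `fx, fy, cx, cy` are illustrative assumptions (the patent does not specify the camera model), and the bilateral filter itself, a standard image operation, is omitted here:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a (denoised) depth map into a 3D point cloud.

    depth: (H, W) array of depths in meters; zeros mark invalid pixels.
    fx, fy, cx, cy: assumed pinhole intrinsics of the depth camera.
    Returns an (N, 3) array of valid points in the camera frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx  # pixel column -> camera X
    y = (v - cy) * depth / fy  # pixel row    -> camera Y
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels
```

Applying this to every denoised depth frame yields the point cloud sequence used by the subsequent registration steps.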
And S1013, calculating the minimum value of the distance difference between two adjacent groups of point clouds in the point cloud sequence by adopting an iterative closest point algorithm.
And S1014, calculating the relative posture between the depth cameras for shooting the two groups of point clouds according to the minimum value.
And S1015, transferring the point clouds into a unified three-dimensional coordinate system according to the relative poses to generate the portrait point cloud.
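Steps S1013 to S1015 can be sketched as a minimal point-to-point ICP. The patent does not specify the ICP variant; this sketch assumes brute-force nearest-neighbour correspondences and an SVD-based (Kabsch) rigid-transform estimate, which also yields the relative pose used to merge adjacent point clouds:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t with dst ~ src @ R.T + t (Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # reflection-safe rotation
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP between two adjacent point clouds."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour of each source point in dst
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        # minimise the distance difference for the current matches
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        # compose the step with the accumulated relative pose
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The accumulated `(R_total, t_total)` plays the role of the relative pose of step S1014; applying the chained poses to each cloud transfers all frames into the unified coordinate system of step S1015.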
S102, completing the static portrait, which specifically comprises the following steps:
and S1021, segmenting and matting each color frame in the color sequence frames respectively to obtain a plurality of portrait masks.
And S1022, according to the relative posture, projecting the multiple portrait masks into a three-dimensional space to generate a complete three-dimensional portrait outline.
And S1023, fusing the three-dimensional portrait outline and the portrait point cloud in the three-dimensional space to generate a static portrait three-dimensional grid.
S103, fusing static portrait colors, which specifically comprises:
and S1031, projecting the color information of each color frame to the corresponding spatial voxel according to the relative posture of the color sequence frame.
And S1032, calculating the average value of the color information and assigning it to each triangular facet of the static portrait three-dimensional grid to obtain the static portrait three-dimensional grid with color information.
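Steps S1031 and S1032 amount to accumulating colour samples per spatial voxel and averaging them. A minimal sketch follows; the voxel size is an assumed illustrative value, as the patent does not give one:

```python
import numpy as np

def average_voxel_colors(points, colors, voxel_size=0.01):
    """Average the colour samples that fall into each spatial voxel.

    points: (N, 3) colour-sample positions, already transformed into the
            unified coordinate system via the relative poses.
    colors: (N, 3) RGB values.
    Returns {voxel_index: mean_rgb}; each mean would then be assigned to
    the triangular facets of the static portrait mesh inside that voxel.
    """
    keys = np.floor(points / voxel_size).astype(int)
    sums, counts = {}, {}
    for k, c in zip(map(tuple, keys), colors):
        sums[k] = sums.get(k, np.zeros(3)) + c
        counts[k] = counts.get(k, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}
```

Averaging over all frames suppresses per-frame exposure and registration noise in the final texture.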
S104, constructing a dynamic portrait three-dimensional grid, which specifically comprises the following steps:
s1041, obtaining the coordinate information of the face key point of each frame of the static portrait three-dimensional grid by using a face key point detection algorithm.
S1042, the convex hull formed by the key points of each face is used as a portrait mask, and the range of the portrait mask is expanded outwards according to a set proportion to obtain a dynamic portrait mask.
As a specific example, the range of the portrait mask is expanded outward by 10% to serve as the dynamic portrait mask.
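The outward expansion of step S1042 can be sketched as scaling the key-point convex hull about its centroid; `scale=1.10` matches the 10% of the example, and uniform scaling about the centroid is one simple realisation assumed here (the patent does not state how the expansion is performed):

```python
import numpy as np

def expand_mask_polygon(landmarks, scale=1.10):
    """Expand a convex face-key-point polygon outward about its centroid.

    landmarks: (N, 2) 2D face key points (the convex hull vertices).
    scale=1.10 grows the mask by 10%, the proportion used in the example.
    """
    centroid = landmarks.mean(axis=0)
    return centroid + scale * (landmarks - centroid)
```

Rasterising the expanded polygon gives the dynamic portrait mask applied to the depth stream.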
And S1043, constructing a dynamic portrait three-dimensional grid in real time according to the depth information of the depth camera in the dynamic portrait mask.
Optionally, fusing the three-dimensional portrait outline and the portrait point cloud in the three-dimensional space, and generating the static portrait three-dimensional mesh includes:
dividing the three-dimensional coordinate system into a plurality of spatial voxels, and judging, for each voxel, whether it is occupied by any point of the portrait point cloud or of the three-dimensional portrait outline;
and if so, extracting triangular patches as the isosurface inside each occupied voxel according to a three-dimensional isosurface extraction algorithm.
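The occupancy test of this fusion step can be sketched as hashing points into voxel indices; a standard three-dimensional isosurface extraction algorithm such as marching cubes would then place triangular patches inside each occupied voxel. The voxel size is an assumed illustrative value:

```python
import numpy as np

def occupied_voxels(points, voxel_size=0.01):
    """Mark each spatial voxel that contains at least one point of the
    portrait point cloud or of the 3D portrait outline."""
    keys = np.floor(points / voxel_size).astype(int)
    return set(map(tuple, keys))
```

An isosurface extractor then visits only these voxels to emit the triangular patches of the static portrait mesh.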
Optionally, after the dynamic portrait three-dimensional mesh is constructed in real time according to the depth information of the depth camera in the dynamic portrait mask, the method further includes:
matching the dynamic portrait mask of the current user with the static portrait three-dimensional grid according to the coordinate information of the face key points, so that each stitching point at the edge of the dynamic portrait mask is placed in one-to-one correspondence with its closest point in the static portrait three-dimensional grid;
and establishing a rotation-displacement matrix from this one-to-one correspondence, and calculating, from the real-time coordinate position of the dynamic portrait mask, the coordinate positions of its closest points on the static portrait three-dimensional grid, so as to drive the static portrait three-dimensional grid and keep the correct pose relationship between the static portrait three-dimensional grid and the dynamic face.
Optionally, after the dynamic portrait three-dimensional mesh is constructed in real time according to the depth information of the depth camera in the dynamic portrait mask, the method further includes:
deleting the portion of the static portrait three-dimensional grid inside the dynamic portrait mask, and fusing the current static portrait three-dimensional grid with the dynamic portrait three-dimensional grid in each spatial voxel to obtain a fused portrait three-dimensional grid;
obtaining a first portrait three-dimensional grid according to a three-dimensional isosurface extraction algorithm;
and smoothing the three-dimensional grid of the portrait to obtain a second three-dimensional grid of the portrait.
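The smoothing algorithm is not specified in the patent; one common choice, shown here as an assumption, is uniform Laplacian smoothing over each vertex's one-ring neighbourhood:

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=5, lam=0.5):
    """Uniform Laplacian smoothing of a triangle mesh.

    vertices: (V, 3) float array; faces: (F, 3) int vertex indices.
    Each pass moves every vertex a fraction `lam` toward the average of
    its one-ring neighbours, attenuating high-frequency seam noise.
    """
    V = len(vertices)
    neighbors = [set() for _ in range(V)]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    verts = vertices.astype(float).copy()
    for _ in range(iterations):
        avg = np.array([verts[list(n)].mean(axis=0) if n else verts[i]
                        for i, n in enumerate(neighbors)])
        verts += lam * (avg - verts)
    return verts
```

A small `lam` or few iterations keeps facial detail while softening the stitch between the static and dynamic meshes.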
Optionally, after the dynamic portrait three-dimensional mesh is constructed in real time according to the depth information of the depth camera in the dynamic portrait mask, the method further includes:
and projecting the color information in the dynamic portrait mask to the corresponding spatial voxels and assigning it to each triangular facet of the second portrait three-dimensional grid to obtain a third portrait three-dimensional grid.
In summary, the depth camera-based method for three-dimensional reconstruction of a static portrait and fusion with a dynamic face proceeds as follows: acquire color and depth frame sequences of the static portrait within the region set by the depth camera; denoise each depth frame with a bilateral filter and project it into three-dimensional space to form a point cloud sequence; compute with an iterative closest point algorithm the minimum distance difference between adjacent point clouds, derive from it the relative pose between the camera viewpoints, and transfer all point clouds into a unified coordinate system to generate the portrait point cloud; segment and matte each color frame into portrait masks and project them into three-dimensional space to generate a complete three-dimensional portrait outline; fuse the outline and the point cloud into a static portrait three-dimensional grid; project the color of each frame into the corresponding spatial voxels according to the relative poses, average it, and assign it to each triangular facet of the grid to obtain the static portrait grid with color information; obtain per-frame face key point coordinates with a face key point detection algorithm, take their convex hull expanded outward by a set proportion as the dynamic portrait mask, and construct the dynamic portrait three-dimensional grid in real time from the depth information inside that mask. The method thus achieves a complete portrait three-dimensional model, real-time driving of the complete model, and dynamic real-time three-dimensional reconstruction of the portrait.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It will be appreciated that the related features of the method and apparatus described above may refer to one another.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (7)
1. A method for three-dimensional reconstruction of static portrait and dynamic face fusion based on a depth camera is characterized by comprising the following steps:
(1) generating a portrait point cloud:
(11) acquiring a color sequence frame and a depth sequence frame of a static portrait in a region range set by a depth camera;
(12) respectively carrying out bilateral filtering denoising treatment on each depth frame in the depth sequence frames, and projecting the depth information of each denoised depth frame into a three-dimensional space to form a corresponding point cloud sequence;
(13) calculating the minimum value of the distance difference between two adjacent groups of point clouds in the point cloud sequence by adopting an iterative closest point algorithm;
(14) calculating a relative pose between depth cameras that capture the two sets of point clouds according to the minimum value;
(15) transferring each point cloud under a uniform three-dimensional coordinate system according to the relative posture to generate a portrait point cloud;
(2) completing the static portrait:
(21) respectively carrying out segmentation matting on each color frame in the color sequence frames to obtain a plurality of portrait masks;
(22) projecting the multiple portrait masks into a three-dimensional space according to the relative postures to generate a complete three-dimensional portrait outline;
(23) fusing the three-dimensional portrait outline and the portrait point cloud in three-dimensional space to generate a static portrait three-dimensional grid;
(3) static portrait color fusion:
(31) according to the relative posture of the color sequence frames, projecting the color information of each color frame to a corresponding space voxel;
(32) calculating the average value of the color information and assigning it to each triangular facet of the static portrait three-dimensional grid to obtain the static portrait three-dimensional grid with color information;
(4) constructing a dynamic portrait three-dimensional grid:
(41) obtaining the coordinate information of the face key points of each frame of the static portrait three-dimensional grid by using a face key point detection algorithm;
(42) taking a convex hull formed by each face key point as a portrait mask, and expanding the range of the portrait mask outwards according to a set proportion to obtain a dynamic portrait mask;
(43) and constructing a dynamic portrait three-dimensional grid in real time according to the depth information of the depth camera in the dynamic portrait mask.
2. The method of claim 1, wherein fusing the three-dimensional portrait contour and the portrait point cloud in a three-dimensional space to generate a three-dimensional grid of the static portrait comprises:
dividing the three-dimensional coordinate system into a plurality of spatial voxels, and judging, for each voxel, whether it is occupied by any point of the portrait point cloud or of the three-dimensional portrait outline;
and if so, extracting triangular patches as the isosurface inside each occupied voxel according to a three-dimensional isosurface extraction algorithm.
3. The method of claim 1, wherein after the dynamic portrait three-dimensional mesh is constructed in real-time according to the depth information of the depth camera in the dynamic portrait mask, the method further comprises:
matching the dynamic portrait mask of the current user with the static portrait three-dimensional grid according to the coordinate information of the face key points, so that each stitching point at the edge of the dynamic portrait mask is placed in one-to-one correspondence with its closest point in the static portrait three-dimensional grid;
and establishing a rotation-displacement matrix from this one-to-one correspondence, and calculating, from the real-time coordinate position of the dynamic portrait mask, the coordinate positions of its closest points on the static portrait three-dimensional grid, so as to drive the static portrait three-dimensional grid and keep the correct pose relationship between the static portrait three-dimensional grid and the dynamic face.
4. The method of claim 1, wherein after the dynamic portrait three-dimensional mesh is constructed in real-time according to the depth information of the depth camera in the dynamic portrait mask, the method further comprises:
deleting the portion of the static portrait three-dimensional grid inside the dynamic portrait mask, and fusing the current static portrait three-dimensional grid with the dynamic portrait three-dimensional grid in each spatial voxel to obtain a fused portrait three-dimensional grid;
obtaining a first portrait three-dimensional grid according to a three-dimensional isosurface extraction algorithm;
and smoothing the portrait three-dimensional grid to obtain a second portrait three-dimensional grid.
5. The method for three-dimensional reconstruction of static portrait and dynamic face fusion based on depth camera as claimed in claim 1 or 4, wherein after real-time construction of dynamic portrait three-dimensional mesh according to depth information of the depth camera in dynamic portrait mask, the method further comprises:
and projecting the color information in the dynamic portrait mask to the corresponding spatial voxels and assigning it to each triangular facet of the second portrait three-dimensional grid to obtain a third portrait three-dimensional grid.
6. A computer program product, characterized in that the computer program product comprises a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 5.
7. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011437377.5A CN112562083A (en) | 2020-12-10 | 2020-12-10 | Depth camera-based static portrait three-dimensional reconstruction and dynamic face fusion method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011437377.5A CN112562083A (en) | 2020-12-10 | 2020-12-10 | Depth camera-based static portrait three-dimensional reconstruction and dynamic face fusion method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112562083A true CN112562083A (en) | 2021-03-26 |
Family
ID=75060433
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011437377.5A Pending CN112562083A (en) | 2020-12-10 | 2020-12-10 | Depth camera-based static portrait three-dimensional reconstruction and dynamic face fusion method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112562083A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113344942A (en) * | 2021-05-21 | 2021-09-03 | 深圳瀚维智能医疗科技有限公司 | Human body massage region segmentation method, device and system and computer storage medium |
CN115294277A (en) * | 2022-08-10 | 2022-11-04 | 广州沃佳科技有限公司 | Three-dimensional reconstruction method and device of object, electronic equipment and storage medium |
WO2023078135A1 (en) * | 2021-11-02 | 2023-05-11 | 上海商汤智能科技有限公司 | Three-dimensional modeling method and apparatus, computer-readable storage medium, and computer device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104599314A (en) * | 2014-06-12 | 2015-05-06 | 深圳奥比中光科技有限公司 | Three-dimensional model reconstruction method and system |
- WO2017026839A1 (en) * | 2015-08-12 | 2017-02-16 | Tricubics Inc. | 3d face model obtaining method and device using portable camera |
WO2019196308A1 (en) * | 2018-04-09 | 2019-10-17 | 平安科技(深圳)有限公司 | Device and method for generating face recognition model, and computer-readable storage medium |
CN111243093A (en) * | 2020-01-07 | 2020-06-05 | 腾讯科技(深圳)有限公司 | Three-dimensional face grid generation method, device, equipment and storage medium |
CN111383307A (en) * | 2018-12-29 | 2020-07-07 | 上海智臻智能网络科技股份有限公司 | Video generation method and device based on portrait and storage medium |
CN111968238A (en) * | 2020-08-22 | 2020-11-20 | 晋江市博感电子科技有限公司 | Human body color three-dimensional reconstruction method based on dynamic fusion algorithm |
-
2020
- 2020-12-10 CN CN202011437377.5A patent/CN112562083A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104599314A (en) * | 2014-06-12 | 2015-05-06 | 深圳奥比中光科技有限公司 | Three-dimensional model reconstruction method and system |
- WO2017026839A1 (en) * | 2015-08-12 | 2017-02-16 | Tricubics Inc. | 3d face model obtaining method and device using portable camera |
WO2019196308A1 (en) * | 2018-04-09 | 2019-10-17 | 平安科技(深圳)有限公司 | Device and method for generating face recognition model, and computer-readable storage medium |
CN111383307A (en) * | 2018-12-29 | 2020-07-07 | 上海智臻智能网络科技股份有限公司 | Video generation method and device based on portrait and storage medium |
CN111243093A (en) * | 2020-01-07 | 2020-06-05 | 腾讯科技(深圳)有限公司 | Three-dimensional face grid generation method, device, equipment and storage medium |
CN111968238A (en) * | 2020-08-22 | 2020-11-20 | 晋江市博感电子科技有限公司 | Human body color three-dimensional reconstruction method based on dynamic fusion algorithm |
Non-Patent Citations (1)
Title |
---|
ZHU Xinshan; WANG Hongxing; HUANG Xiangsheng: "Real-time reconstruction algorithm for a textured face based on a single depth camera", Transducer and Microsystem Technologies, no. 08, 20 August 2017 (2017-08-20) * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113344942A (en) * | 2021-05-21 | 2021-09-03 | Shenzhen Hanwei Intelligent Medical Technology Co., Ltd. | Human body massage region segmentation method, device and system and computer storage medium |
CN113344942B (en) * | 2021-05-21 | 2024-04-02 | Shenzhen Hanwei Intelligent Medical Technology Co., Ltd. | Human body massage region segmentation method, device and system and computer storage medium |
WO2023078135A1 (en) * | 2021-11-02 | 2023-05-11 | Shanghai SenseTime Intelligent Technology Co., Ltd. | Three-dimensional modeling method and apparatus, computer-readable storage medium, and computer device |
CN115294277A (en) * | 2022-08-10 | 2022-11-04 | Guangzhou Wojia Technology Co., Ltd. | Three-dimensional reconstruction method and device of object, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110363858B (en) | Three-dimensional face reconstruction method and system | |
CN109196561B (en) | System and method for three-dimensional garment mesh deformation and layering for fitting visualization | |
CN109003325B (en) | Three-dimensional reconstruction method, medium, device and computing equipment | |
CN112562083A (en) | Depth camera-based static portrait three-dimensional reconstruction and dynamic face fusion method | |
CN111243093B (en) | Three-dimensional face grid generation method, device, equipment and storage medium | |
CN107945267B (en) | Method and equipment for fusing textures of three-dimensional model of human face | |
CN107507217B (en) | Method and device for making certificate photo and storage medium | |
CN107564080B (en) | Face image replacement system | |
CN110930334B (en) | Grid denoising method based on neural network | |
Sýkora et al. | Textoons: practical texture mapping for hand-drawn cartoon animations | |
JP2013097806A (en) | Video processor and video processing method | |
JP2023526566A (en) | fast and deep facial deformation | |
EP3772040A1 (en) | Method and computer program product for producing 3-dimensional model data of a garment | |
CN109147025B (en) | RGBD three-dimensional reconstruction-oriented texture generation method | |
CN110097626A | Bas-relief object recognition and processing method based on a monocular RGB image | |
JP7294788B2 (en) | Classification of 2D images according to the type of 3D placement | |
Hervieu et al. | Stereoscopic image inpainting: distinct depth maps and images inpainting | |
CN112651881B (en) | Image synthesizing method, apparatus, device, storage medium, and program product | |
CN112734890A (en) | Human face replacement method and device based on three-dimensional reconstruction | |
US20210012568A1 (en) | Methods, devices and computer program products for gradient based depth reconstructions with robust statistics | |
CN111382618B (en) | Illumination detection method, device, equipment and storage medium for face image | |
CN116958453B (en) | Three-dimensional model reconstruction method, device and medium based on nerve radiation field | |
CN112509109A (en) | Single-view illumination estimation method based on neural network model | |
CN109461197B (en) | Cloud real-time drawing optimization method based on spherical UV and re-projection | |
CN108655571A | Digitally controlled laser engraving machine, control system, control method, and computer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||