CN113112606A - Face correction method, system and storage medium based on three-dimensional live-action modeling
- Publication number: CN113112606A
- Application number: CN202110409028.0A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T15/00—3D [Three Dimensional] image rendering
Abstract
The invention relates to a face correction method, system and storage medium based on three-dimensional live-action modeling, wherein the face correction method comprises the following steps: selecting a face correction region whose mesh is M_f = {P_1, P_2, P_3, ..., P_i}, with depth direction z_h and depth-direction unit vector n_zh = (x_zh, y_zh, z_zh); setting the number of depth layers N_h and generating depth planes at equal intervals, giving the set of depth planes H = {h_1, h_2, h_3, ..., h_i}, where h_i denotes the set of points belonging to the i-th depth plane; determining the movement mode of the mesh point cloud, the correction depth factor α in the depth direction and the correction depth factor β in the plane perpendicular to the depth direction, and obtaining the mesh generated after the movement, M_fc = {P_1', P_2', P_3', ..., P_i'}; applying an interpolation algorithm to the mesh M_fc = {P_1', P_2', P_3', ..., P_i'} to obtain the corrected mesh M_fl = {P_1'', P_2'', P_3'', ..., P_i''}. According to the invention, an automatic face correction step is added to the three-dimensional live-action modeling process to restore the depth information lost during modeling and recover the stereoscopic appearance of the face.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a face correction method, system and storage medium based on three-dimensional live-action modeling.
Background
Three-dimensional live-action modeling generates, from photographs taken at different angles, a three-dimensional model that is highly faithful to the real scene. Owing to its realism, flexibility and abundant sources of models, it has greatly broadened the application of three-dimensional modeling and is widely used in fields such as smart cities, games, film and television entertainment, and advertising.
In three-dimensional live-action modeling of a person, accurate modeling of the face is essential. At present, limited by acquisition and modeling technology, the depth calculation in three-dimensional live-action modeling of people carries an error of about 1 cm, which deforms the resulting model to some extent. Because the depth variation across a human face is small, even this small depth error strongly affects the depth reconstruction of the face; in particular, compression of the facial depth degrades the three-dimensional effect and easily makes the front of the face appear inflated.
The existing three-dimensional live-action modeling method for human faces generally comprises four steps. Once the 3D mesh is generated, the size and overall shape of the model are essentially fixed; the model can then be modified only by manually adjusting parts of the mesh, and the facial depth cannot be corrected automatically.
The prior art therefore still leaves room for improvement.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a face correction method, system and storage medium based on three-dimensional live-action modeling, adding an automatic face correction step to the three-dimensional live-action modeling process so as to restore the depth information lost during modeling and recover the stereoscopic appearance of the face.
To achieve the above purpose, the invention provides the following technical solutions:
the invention provides a face correction method based on three-dimensional live-action modeling, which comprises the following steps:
S100, selecting a face correction region, wherein the mesh of the face correction region is M_f = {P_1, P_2, P_3, ..., P_i}, the depth direction is z_h, and the unit vector in the depth direction is n_zh = (x_zh, y_zh, z_zh);
S200, setting the number of depth layers N_h and generating depth planes at equal intervals, giving the set of depth planes H = {h_1, h_2, h_3, ..., h_i}, wherein h_i denotes the set of points belonging to the i-th depth plane;
S300, determining the movement mode of the mesh point cloud, the correction depth factor α in the depth direction and the correction depth factor β in the plane perpendicular to the depth direction, and obtaining the mesh generated after the movement, M_fc = {P_1', P_2', P_3', ..., P_i'};
S400, applying an interpolation algorithm to the mesh M_fc = {P_1', P_2', P_3', ..., P_i'} to obtain the corrected mesh M_fl = {P_1'', P_2'', P_3'', ..., P_i''}.
Further, if a point P_q lies between adjacent depth planes h_i and h_(i+1), at distances d_q^i and d_q^(i+1) from them respectively, then P_q ∈ h_i if d_q^i ≤ d_q^(i+1), and P_q ∈ h_(i+1) otherwise; the neighborhood of a depth plane is thus obtained as the points lying closer to it than to any other depth plane.
Further, if the movement mode of the mesh point cloud is uniform movement along the depth direction and along the plane perpendicular to the depth direction, the point set after movement along the depth direction is H_c, the point set after movement in the perpendicular plane is W_c, and the depth-plane point set after both movements is H_wc = H_c + W_c - H.
Further, the point set H_c after movement along the depth direction is calculated as follows:
along z_h, a starting plane h_1 and a stopping plane h_s are selected; if the distance between them is d_1s, the distance between adjacent depth planes before correction is d_1s/(N_h - 1); after correction, the distance between the starting and stopping planes is d_1sc = (1 + α)d_1s, and the distance between adjacent depth planes is (1 + α)d_1s/(N_h - 1);
taking the middle plane between the starting plane h_1 and the stopping plane h_s as the reference plane of the movement, the point sets on the depth planes expand along the depth direction, each depth plane h_i translating to a new position by a movement vector t_si = (t_sx, t_sy, t_sz) of magnitude |t_si| and translation unit direction vector n_ti = n_zh = (x_zh, y_zh, z_zh); the movement amount of the i-th depth plane is thus obtained as t_si = (i - mid)·t·n_zh, and the moved point set is H_c = {h_i + t_si},
wherein t is the difference between the movement distances of adjacent depth planes, t = α·d_1s/(N_h - 1);
wherein mid = floor(N_h/2), and s = N_h.
Further, the point set W_c after movement in the plane perpendicular to the depth direction is calculated as follows:
let the unit vector of a direction in the plane perpendicular to the depth direction be n_xw = (x_xw, y_xw, z_xw); along x_w, a starting plane w_1 and a stopping plane w_s are selected; if the distance between them is d_1w, the corrected distance between the starting and stopping planes is d_1wc = (1 - β)d_1w;
taking the middle plane between the starting plane w_1 and the stopping plane w_s as the reference plane of the movement, the point sets on the planes are compressed along the direction perpendicular to the depth direction, the movement amount of the i-th plane being t_wi = (mid - i)·t'·n_xw, and the moved point set being W_c = {w_i + t_wi},
wherein t' is the difference between the movement distances of adjacent planes, t' = β·d_1w/(N_w - 1);
wherein mid = floor(N_w/2), and s = N_w.
Further, let the mesh of the selected region before movement be M_f, with circumscribed cuboid volume V_f0 of the enclosed region;
after movement along the depth direction, the region mesh is M_fα, with circumscribed cuboid volume V_fα = (1 + α)V_f0 of the enclosed region;
after movement in the plane perpendicular to the depth direction, the region mesh is M_fβ, with circumscribed cuboid volume V_fβ = (1 - β)V_fα = (1 + α)(1 - β)V_f0 of the enclosed region, wherein (1 + α)(1 - β) = 1.
Further, if the movement mode of the mesh point cloud is movement along the normal direction, the normal direction n_pi = (n_x, n_y, n_z) of each point is projected onto the depth direction z_h and onto the plane formed by x_w and y_w perpendicular to z_h, and the point set after the movement is obtained by displacing each point by α times the depth component of its normal and β times its in-plane component.
further, for the grid Mfc={P1',P2',P3',…,Pi' } interpolation algorithms include, but are not limited to, bilinear interpolation, cubic spline interpolation, or linear interpolation triangulation.
The invention also provides a system comprising a memory, a processor, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the face correction method described above.
The invention also provides a computer-readable storage medium storing a computer program which, when executed, implements the face correction method described above.
The technical scheme of the invention has the following beneficial effects:
the face correction method of the invention restores the missing depth information in the modeling process by adding the face automatic correction process in the three-dimensional live-action modeling process, and restores the three-dimensional sense of the face.
Drawings
FIG. 1 is a schematic flow chart of the face correction method of the present invention;
FIG. 2 is a schematic view of the layering in the depth direction according to the present invention;
FIG. 3 is a schematic view of the layering in the plane perpendicular to the depth direction according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. The specific embodiments described herein serve only to explain the present invention and are not intended to limit it. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
It should be noted that all directional indications in the embodiments of the present invention (such as up, down, left, right, front and rear) are used only to explain the relative positional relationships and movements of the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
In the present invention, unless expressly stated or limited otherwise, the terms "connected", "secured" and the like are to be construed broadly: for example, "connected" may mean fixedly connected, detachably connected, or integrally formed; mechanically or electrically connected; directly connected, or indirectly connected through an intermediate medium. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
In addition, the descriptions related to "first", "second", etc. in the present invention are only for descriptive purposes and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature.
Referring to FIG. 1, the present invention provides a face correction method based on three-dimensional live-action modeling, which comprises the following steps:
S100, selecting a face correction region, wherein the mesh of the face correction region is M_f = {P_1, P_2, P_3, ..., P_i}, the depth direction is z_h, and the unit vector in the depth direction is n_zh = (x_zh, y_zh, z_zh).
The 3D mesh consists of a relational mesh of triangles or quadrilaterals (or polygons) formed from the dense point cloud according to positional relationships. Each point in the mesh can be represented as P_i(x, y, z, n_x, n_y, n_z, r, g, b), where x, y, z are the position coordinates of the point, n_x, n_y, n_z are the components of the point's normal direction vector, and r, g, b are the RGB color of the point.
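As an illustrative sketch (not part of the patent text), such a point record can be held in a simple Python structure; the class and array names below are assumptions reused in the later examples:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class MeshPoint:
    # Position coordinates of the point
    x: float
    y: float
    z: float
    # Normal direction vector of the point
    nx: float
    ny: float
    nz: float
    # RGB color of the point
    r: int
    g: int
    b: int

# The numerical sketches below instead keep positions and normals as
# (N, 3) NumPy arrays, the idiomatic bulk representation of a point cloud.
points = np.array([[0.00, 0.00, 0.10],
                   [0.10, 0.00, 0.12],
                   [0.00, 0.10, 0.09]])
normals = np.tile([0.0, 0.0, 1.0], (3, 1))
```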
S200, setting the number of depth layers N_h and generating depth planes at equal intervals, giving the set of depth planes H = {h_1, h_2, h_3, ..., h_i}, wherein h_i denotes the set of points belonging to the i-th depth plane;
S300, determining the movement mode of the mesh point cloud, the correction depth factor α in the depth direction and the correction depth factor β in the plane perpendicular to the depth direction, and obtaining the mesh generated after the movement, M_fc = {P_1', P_2', P_3', ..., P_i'}, where α and β express the relative change in depth after correction compared with before correction;
since the movement leaves parts of the mesh insufficiently smooth, step S400 applies an interpolation algorithm to the mesh M_fc = {P_1', P_2', P_3', ..., P_i'} to obtain the corrected mesh M_fl = {P_1'', P_2'', P_3'', ..., P_i''}, recovering the depth information lost during three-dimensional live-action modeling and thereby restoring the stereoscopic appearance of the face. The face correction method of the invention adds no modeling complexity or equipment requirements and is flexible to use.
Preferably, h_i, the set of points belonging to the i-th depth plane, is determined as follows:
if a point P_q lies between adjacent depth planes h_i and h_(i+1), at distances d_q^i and d_q^(i+1) from them respectively, then P_q ∈ h_i if d_q^i ≤ d_q^(i+1), and P_q ∈ h_(i+1) otherwise. That is, the set for each depth plane contains the points in the plane together with the points in its neighborhood, the neighborhood of a depth plane being the points that lie closer to it than to any other depth plane. The point set of each layer is computed from this distance relationship, and the layers need not contain the same number of points.
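A minimal sketch of this layering step, assuming the (N, 3) `points` array above and a unit depth vector `n_zh`; assigning each point to its nearest plane is exactly the d_q^i ≤ d_q^(i+1) rule:

```python
def assign_depth_layers(points: np.ndarray, n_zh: np.ndarray, num_layers: int):
    """Partition a point cloud into num_layers equally spaced depth planes.

    Returns the (0-based) layer index of each point and the signed depth
    coordinate of each plane along n_zh.
    """
    n_zh = np.asarray(n_zh)
    depth = points @ n_zh                        # signed depth of each point along z_h
    plane_depths = np.linspace(depth.min(), depth.max(), num_layers)
    # Nearest plane wins: the rule "P_q in h_i if d_q^i <= d_q^(i+1)"
    layer = np.abs(depth[:, None] - plane_depths[None, :]).argmin(axis=1)
    return layer, plane_depths
```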
In one embodiment, if the movement mode of the mesh point cloud is uniform movement along the depth direction and along the plane perpendicular to the depth direction, the point set after movement along the depth direction is H_c and the point set after movement in the perpendicular plane is W_c. Because the two movement directions are mutually perpendicular, the two movements do not interfere with each other; the depth-plane point set after both movements are finished is H_wc = H_c + W_c - H, and the mesh M_fc = {P_1', P_2', P_3', ..., P_i'} is generated from this point set.
Specifically, the point set H_c after movement along the depth direction is calculated as follows:
because the movement is uniform, a starting plane h_1 and a stopping plane h_s are selected along z_h. If the distance between them is d_1s, the distance between adjacent depth planes before correction is d_1s/(N_h - 1); after correction, the distance between the starting and stopping planes is d_1sc = (1 + α)d_1s, and the distance between adjacent depth planes is (1 + α)d_1s/(N_h - 1).
The reference plane of the movement is the middle plane between the starting plane h_1 and the stopping plane h_s. The point sets on the depth planes expand along the depth direction, i.e., the depth planes on either side of the middle plane move away from it. Each depth plane h_i translates to a new position by a movement vector t_si = (t_sx, t_sy, t_sz); the movement distance is the magnitude |t_si| of the translation, and the translation unit direction vector coincides with the depth direction z_h, i.e., n_ti = n_zh = (x_zh, y_zh, z_zh). From the magnitude and the direction vector, the movement amount of the i-th depth plane is obtained as t_si = (i - mid)·t·n_zh,
wherein t is the difference between the movement distances of adjacent depth planes, t = α·d_1s/(N_h - 1).
The points of each depth plane move uniformly, from the middle of the region toward the two sides; because the movement is uniform and symmetric, the displacements on the two sides are equal in magnitude and opposite in direction, and the difference between the displacements of adjacent layers is t. The moved point set is then H_c = {h_i + t_si | i = 1, 2, ..., s},
wherein mid = floor(N_h/2), the largest integer not exceeding N_h/2, and s = N_h.
Similarly, the point set W_c after movement in the plane perpendicular to the depth direction is calculated as follows:
let the unit vector of a direction in the perpendicular plane be n_xw = (x_xw, y_xw, z_xw). Along x_w, a starting plane w_1 and a stopping plane w_s are selected, with distance d_1w between them; the number of layers N_w is set and the layers are generated at equal intervals; after correction, the distance between the starting and stopping planes is d_1wc = (1 - β)d_1w.
The reference plane of the movement is the middle plane between the starting plane w_1 and the stopping plane w_s. The point sets on the planes are compressed along the direction perpendicular to the depth direction, i.e., the planes on either side of the middle plane move uniformly toward it, the movement amount of the i-th plane being t_wi = (mid - i)·t'·n_xw,
wherein t' is the difference between the movement distances of adjacent planes, t' = β·d_1w/(N_w - 1).
The points of each plane move uniformly, from the two sides of the region toward the middle; because the movement is uniform and symmetric, the displacements on the two sides are equal in magnitude and opposite in direction, and the difference between the displacements of adjacent layers is t'. The moved point set is then W_c = {w_i + t_wi | i = 1, 2, ..., s},
wherein mid = floor(N_w/2), the largest integer not exceeding N_w/2, and s = N_w.
In this embodiment, let the mesh of the selected region before movement be M_f, with circumscribed cuboid volume V_f0 of the enclosed region.
After movement along the depth direction, the region mesh is M_fα, with circumscribed cuboid volume V_fα = (1 + α)V_f0 of the enclosed region.
After movement in the plane perpendicular to the depth direction, the region mesh is M_fβ, with circumscribed cuboid volume V_fβ = (1 - β)V_fα = (1 + α)(1 - β)V_f0 of the enclosed region. To avoid large changes in the movement and prevent overcorrection, the volume before and after correction is constrained to be equal, i.e., (1 + α)(1 - β) = 1.
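For example, solving (1 + α)(1 - β) = 1 for β gives β = α/(1 + α): a depth expansion of α = 0.25 therefore pairs with an in-plane compression of β = 0.2, and the circumscribed volume is preserved, since V_fβ = 1.25 × 0.8 × V_f0 = V_f0.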
As a second embodiment, if the movement mode of the mesh point cloud is movement along the normal direction: the mesh point cloud carries normals, and the normals represent the concave-convex variation of the surface, so uniform movement along the depth direction cannot restore a contour with concave-convex variation well; in this case the movement direction is changed to the normal direction. The normal direction n_pi = (n_x, n_y, n_z) of each point is projected onto the depth direction z_h and onto the plane formed by x_w and y_w perpendicular to z_h; here α and β denote the relative movement proportions of the normal components in those projections. After the movement is finished, a new point cloud is generated in the selected region, each point having been displaced by α times the depth component of its normal and β times its in-plane component, and the mesh M_fc = {P_1', P_2', P_3', ..., P_i'} is generated from the resulting point set.
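A sketch of this normal-direction movement, reading the projection description as: move each point by α times the depth component of its normal and β times its in-plane component (the sign convention, expanding in depth and compressing in plane, is an assumption):

```python
def move_along_normals(points, normals, n_zh, alpha, beta):
    """Decompose each normal into its z_h component and its component in the
    perpendicular (x_w, y_w) plane, then displace the point by the scaled parts."""
    n_zh = np.asarray(n_zh)
    depth_comp = (normals @ n_zh)[:, None] * n_zh[None, :]   # projection onto z_h
    plane_comp = normals - depth_comp                        # remainder lies in the x_w-y_w plane
    return points + alpha * depth_comp - beta * plane_comp
```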
Preferably, the interpolation algorithm applied to the mesh M_fc = {P_1', P_2', P_3', ..., P_i'} includes, but is not limited to, bilinear interpolation, cubic spline interpolation, or linear-interpolation triangulation. If uniform movement is used, uniform bilinear interpolation is adopted; if normal-direction movement is used, a common interpolation method such as cubic interpolation, cubic spline interpolation or linear-interpolation triangulation is adopted. The interpolation algorithm can be chosen according to the actual situation.
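A sketch of the S400 resampling for the uniform-movement case, using SciPy's griddata over a regular (x, y) grid; it assumes the corrected region can be treated as a depth field over the (x, y) plane, method='linear' corresponds to the bilinear choice, 'cubic' to a smoother surface, and the grid resolution is an assumption:

```python
from scipy.interpolate import griddata

def resample_depth(points_c: np.ndarray, grid_res: int = 128, method: str = "linear"):
    """Interpolate the moved points' depth over a regular grid in the (x, y) plane."""
    xy, z = points_c[:, :2], points_c[:, 2]
    xs = np.linspace(xy[:, 0].min(), xy[:, 0].max(), grid_res)
    ys = np.linspace(xy[:, 1].min(), xy[:, 1].max(), grid_res)
    gx, gy = np.meshgrid(xs, ys)
    gz = griddata(xy, z, (gx, gy), method=method)  # NaN outside the data's convex hull
    return gx, gy, gz
```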
The present invention also provides a system comprising a memory, a processor, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the face correction method set forth above.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution process of the computer program in the system.
The system may include, but is not limited to, a processor and a memory. It will be appreciated by those skilled in the art that the components described above are merely an example of a system and do not constitute a limitation; a system may include more or fewer components than described, combine certain components, or include different components. For example, a system may also include input and output devices, network access devices, buses, and the like.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor; it is the control center of the device and connects the various parts of the overall system using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the apparatus by running or executing the computer programs and/or modules stored in the memory and by invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to use (such as audio data or a phone book). In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The present invention also provides a computer-readable storage medium storing a computer program which, when executed, implements the face correction method described above.
It should be noted that the above-described embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiments provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
The above examples are given merely to illustrate the present invention clearly and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to list all embodiments exhaustively here. All obvious changes and modifications derived from the technical solution of the present invention fall within its protection scope.
Claims (10)
1. A face correction method based on three-dimensional live-action modeling, characterized by comprising the following steps:
S100, selecting a face correction region, wherein the mesh of the face correction region is M_f = {P_1, P_2, P_3, ..., P_i}, the depth direction is z_h, and the unit vector in the depth direction is n_zh = (x_zh, y_zh, z_zh);
S200, setting the number of depth layers N_h and generating depth planes at equal intervals, giving the set of depth planes H = {h_1, h_2, h_3, ..., h_i}, wherein h_i denotes the set of points belonging to the i-th depth plane;
S300, determining the movement mode of the mesh point cloud, the correction depth factor α in the depth direction and the correction depth factor β in the plane perpendicular to the depth direction, and obtaining the mesh generated after the movement, M_fc = {P_1', P_2', P_3', ..., P_i'};
S400, applying an interpolation algorithm to the mesh M_fc = {P_1', P_2', P_3', ..., P_i'} to obtain the corrected mesh M_fl = {P_1'', P_2'', P_3'', ..., P_i''}.
3. The method of claim 1, wherein, if the movement mode of the mesh point cloud is uniform movement along the depth direction and along the plane perpendicular to the depth direction, the point set after movement along the depth direction is H_c, the point set after movement in the perpendicular plane is W_c, and the depth-plane point set after both movements is H_wc = H_c + W_c - H.
4. The face correction method of claim 3, wherein the point set H_c after movement along the depth direction is calculated as follows:
along z_h, a starting plane h_1 and a stopping plane h_s are selected; if the distance between them is d_1s, the distance between adjacent depth planes before correction is d_1s/(N_h - 1); after correction, the distance between the starting and stopping planes is d_1sc = (1 + α)d_1s, and the distance between adjacent depth planes is (1 + α)d_1s/(N_h - 1);
taking the middle plane between the starting plane h_1 and the stopping plane h_s as the reference plane of the movement, the point sets on the depth planes expand along the depth direction, each depth plane h_i translating to a new position by a movement vector t_si = (t_sx, t_sy, t_sz) of magnitude |t_si| and translation unit direction vector n_ti = n_zh = (x_zh, y_zh, z_zh); the movement amount of the i-th depth plane is thus obtained as t_si = (i - mid)·t·n_zh, and the moved point set is H_c = {h_i + t_si},
wherein t is the difference between the movement distances of adjacent depth planes, t = α·d_1s/(N_h - 1);
wherein mid = floor(N_h/2), and s = N_h.
5. The face correction method of claim 3, wherein the point set W_c after movement in the plane perpendicular to the depth direction is calculated as follows:
let the unit vector of a direction in the plane perpendicular to the depth direction be n_xw = (x_xw, y_xw, z_xw); along x_w, a starting plane w_1 and a stopping plane w_s are selected; if the distance between them is d_1w, the corrected distance between the starting and stopping planes is d_1wc = (1 - β)d_1w;
taking the middle plane between the starting plane w_1 and the stopping plane w_s as the reference plane of the movement, the point sets on the planes are compressed along the direction perpendicular to the depth direction, the movement amount of the i-th plane being t_wi = (mid - i)·t'·n_xw, and the moved point set being W_c = {w_i + t_wi},
wherein t' is the difference between the movement distances of adjacent planes, t' = β·d_1w/(N_w - 1);
wherein mid = floor(N_w/2), and s = N_w.
6. The face correction method of claim 1, wherein the mesh of the selected region before movement is M_f and the circumscribed cuboid volume of the enclosed region is V_f0;
after movement along the depth direction, the region mesh is M_fα and the circumscribed cuboid volume of the enclosed region is V_fα, with V_fα = (1 + α)V_f0;
after movement in the plane perpendicular to the depth direction, the region mesh is M_fβ and the circumscribed cuboid volume of the enclosed region is V_fβ, with V_fβ = (1 - β)V_fα = (1 + α)(1 - β)V_f0, wherein (1 + α)(1 - β) = 1.
7. The method of claim 1, wherein, if the movement mode of the mesh point cloud is movement along the normal direction, the normal direction n_pi = (n_x, n_y, n_z) of each point is projected onto the depth direction z_h and onto the plane formed by x_w and y_w perpendicular to z_h, and the point set after the movement is obtained by displacing each point by α times the depth component of its normal and β times its in-plane component.
8. The face correction method of claim 1, wherein the interpolation algorithm applied to the mesh M_fc = {P_1', P_2', P_3', ..., P_i'} includes, but is not limited to, bilinear interpolation, cubic spline interpolation, or linear-interpolation triangulation.
9. A system comprising a memory, a processor, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the face correction method of any one of claims 1-8.
10. A computer-readable storage medium, in which a computer program is stored, which, when executed, implements the face correction method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110409028.0A | 2021-04-16 | 2021-04-16 | Face correction method, system and storage medium based on three-dimensional live-action modeling
Publications (2)
Publication Number | Publication Date
---|---
CN113112606A | 2021-07-13
CN113112606B | 2023-05-30
Family
ID=76717919
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
EP1745442A1 | 2004-02-18 | 2007-01-24 | Laurence Marzell | Adaptive 3D image modelling system and apparatus and method thereof
CZ2006283A3 | 2006-05-02 | 2007-11-14 | Miroslav Zeman | Structural building element
US20160196467A1 | 2015-01-07 | 2016-07-07 | Shenzhen Weiteshi Technology Co. Ltd. | Three-dimensional face recognition device and method based on three-dimensional point cloud
CN109979013A | 2017-12-27 | 2019-07-05 | TCL Corporation | Three-dimensional face texture mapping method and terminal device
Non-Patent Citations (1)
Title
---
王金伟 et al., "三维人脸姿态校正研究" [Research on three-dimensional face pose correction], 《科技资讯》 [Science & Technology Information]
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant