CN102542601A - Equipment and method for modeling three-dimensional (3D) object - Google Patents

Equipment and method for modeling three-dimensional (3D) object

Info

Publication number
CN102542601A
CN102542601A (application numbers CN2010106019130A / CN201010601913A)
Authority
CN
China
Prior art keywords
image
depth
color image
template
deformation process
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010106019130A
Other languages
Chinese (zh)
Inventor
张辉
孙迅
焦少辉
金庸善
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Original Assignee
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Samsung Telecommunications Technology Research Co Ltd, Samsung Electronics Co Ltd filed Critical Beijing Samsung Telecommunications Technology Research Co Ltd
Priority to CN2010106019130A priority Critical patent/CN102542601A/en
Publication of CN102542601A publication Critical patent/CN102542601A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides equipment and a method for modeling a three-dimensional (3D) object. The equipment and method fuse a depth image with multi-view color images, so that the 3D object is modeled automatically on the basis of a predefined 3D object template. The equipment comprises an image receiving unit, a 3D template providing unit and an object modeling unit. The image receiving unit receives captured color images and a depth image, where the color images comprise a frontal color image and at least two lateral color images taken from other view angles, and the depth image is a frontal depth image. The 3D template providing unit provides a 3D object template predefined for a specific type of object; the 3D object template has both shape and texture features. The object modeling unit deforms the 3D object template provided by the 3D template providing unit based on the color images and the depth image received by the image receiving unit, and performs texture synthesis on the color images and the deformed 3D object template to generate a 3D object model.

Description

Equipment and method for 3D object modeling
Technical Field
The present invention relates to three-dimensional (3D) object modeling and, more particularly, to equipment and a method that can automatically and effectively perform 3D object modeling based on a predefined 3D object template.
Background Art
In three-dimensional (3D) computer graphics, 3D object modeling refers to the process of generating a mathematical representation of any three-dimensional object; the result of this process is called a 3D object model.
3D object modeling is widely used in fields such as digital entertainment; for example, virtual reality, augmented reality, 3D film, 3D games and human-computer interaction all require 3D object modeling. At present, most 3D object modeling is still carried out manually by specialized personnel, which is not only time-consuming but also requires a large amount of human effort.
On the other hand, in prior-art automatic 3D object modeling, if the number of input images is limited, sufficient shape constraints cannot be provided for the object being modeled. For example, in United States Patent No. 10927241, "Method and apparatus for image-based photorealistic 3D face modeling", almost no shape constraint can be provided for the cheek region, whose shape is determined essentially by the shape template alone.
In addition, plain two-dimensional image/video data cannot provide good shape information for subjects captured under uncontrolled illumination or having little texture. To address this problem, many prior-art techniques use statistical models to improve the modeling quality. However, building a statistical model requires considerable time and cost, and a separate statistical model must be provided for every different object type; statistical models are therefore of limited practical convenience.
When a depth image is used for 3D object modeling, the depth image itself is relatively sparse, so super-resolution processing must be applied to the depth data to improve the modeling quality. However, existing super-resolution techniques (for example, United States Patent No. 11444947, "Method and system to increase X-Y resolution in a depth camera using RGB") rely only on local interpolation, which often causes the generated shape to be insufficiently smooth and to deviate from the real object.
Automatic 3D object modeling based on images/video/depth images frequently uses nonlinear optimization. However, nonlinear optimization has inherent drawbacks: it is not robust and needs a very good initial value to achieve acceptable performance. Moreover, optimization usually requires iterative computation and is therefore very slow.
Accordingly, there is a need for effective equipment and an effective method that can automatically perform 3D object modeling based on a predefined 3D object template.
Summary of the Invention
An object of the present invention is to provide effective equipment and an effective method that can automatically perform 3D object modeling based on a predefined 3D object template. According to the 3D object modeling equipment and method, a predefined template is deformed based on multi-view images and a depth image, and a corresponding texture is generated, so that 3D object modeling is carried out effectively.
According to an aspect of the present invention, equipment for 3D object modeling is provided. The equipment comprises: an image receiving unit for receiving captured color images and a depth image, where the color images comprise a frontal color image and at least two lateral color images taken from other view angles, and the depth image is a frontal depth image; a 3D template providing unit for providing a 3D object template predefined for a specific type of object, where the 3D object template has both shape and texture features; and an object modeling unit for deforming the 3D object template provided by the 3D template providing unit based on the color images and the depth image received by the image receiving unit, and for performing texture synthesis on the color images and the deformed 3D object template to generate a 3D object model.
The object modeling unit may comprise: a feature extraction module for extracting, from the color images, feature information used to determine the shape of the object; a foreground separation module for separating the foreground image of interest by producing a foreground mask image from the frontal color image and the frontal depth image; a super-resolution module for performing super-resolution processing to produce a dense depth map corresponding to the frontal color image; a deformation module for deforming the 3D object template provided by the 3D template providing unit based on the feature information extracted by the feature extraction module and the dense depth map produced by the super-resolution module; and a texture generation module for performing texture synthesis on the color images and the deformed 3D object template to generate the 3D object model.
The foreground separation module performs foreground segmentation on the frontal depth image to obtain a foreground mask; then, using known calibration parameters, it projects the depth pixels corresponding to the foreground mask onto the frontal color image plane to form a sparse set of 2D points on the frontal color image, and fills the gaps between these sparse 2D points to form the corresponding foreground image on the frontal color image.
The foreground separation module may use an image matting technique to refine the boundary of the obtained foreground image.
The super-resolution module projects the pixels of the depth image onto the frontal color image plane to obtain seed points, i.e., pixels of the frontal color image that carry a depth value; performs local interpolation based on the obtained seed points and the frontal color image to produce a dense depth map corresponding to the frontal color image; quantizes the dense depth map; and performs global optimization on the quantized dense depth map to improve its smoothness.
When performing local interpolation, the super-resolution module determines the depth value of an interpolated pixel in the frontal color image with the following equations:
d_p = Σ_{q∈N(p)} W_pq × d_q / Σ_{q∈N(p)} W_pq
W_pq = exp((1 − α(ΔΩ)) × color_diff − spatial_diff), where
α(ΔΩ) = 1 / (1 + e^{−ε×(ΔΩ−τ)})
In the above equations, p denotes any interpolated pixel in the frontal color image, N(p) denotes the neighborhood of pixel p, q denotes any seed point in the neighborhood N(p), d_p denotes the depth value of pixel p, d_q denotes the depth value of seed point q, and W_pq denotes the depth weight of seed point q with respect to pixel p. ΔΩ denotes the difference between the maximum and minimum depth values in the neighborhood N(p); color_diff and spatial_diff denote, respectively, the color similarity and the spatial proximity between seed point q and the interpolated pixel p; and ε and τ are positive constants determined by experiment.
Pixel p and its neighborhood N(p) may be restricted to the foreground image separated by the foreground separation module.
When performing global optimization on the discrete dense depth map, the super-resolution module obtains the optimized dense depth map by finding the set of pixels p that minimizes the following objective function E:
E = Σ_p (E_data + E_smooth)
where E_data is the data cost, a measure of the consistency between the final depth map and the input depth map, and E_smooth is the smoothness cost, a measure of the consistency of depth within a neighborhood.
When performing the deformation, the deformation module applies, in sequence, a global transform, a frontal deformation, a frontal boundary update, depth interpolation, point updating and 3D interpolation to the 3D object template.
When performing the global transform, the deformation module transforms the 3D object template globally, based on the correspondence between key points of the 3D object template provided by the 3D template providing unit and of the dense depth map produced by the super-resolution module, so that the template approximately matches the optimized dense depth map.
When performing the frontal deformation, the deformation module performs 2D data interpolation on the globally transformed 3D object template based on the feature information extracted from the frontal color image by the feature extraction module.
When performing the frontal boundary update, the deformation module updates the frontal boundary of the deformed 3D object template based on the feature information extracted from the lateral color images by the feature extraction module.
The texture generation module uses a color transfer mechanism to color-calibrate the lateral color images and the deformed 3D object template so that their tones are consistent with the frontal color image; maps the frontal color image and the color-calibrated lateral color images onto the texture coordinate plane; merges the texture of the lateral color images with the texture of the frontal color image; and merges that result with the texture of the color-calibrated 3D object template, thereby outputting the final 3D object model.
According to another aspect of the present invention, a method for 3D object modeling is provided. The method comprises: receiving captured color images and a depth image, where the color images comprise a frontal color image and at least two lateral color images taken from other view angles, and the depth image is a frontal depth image; receiving a 3D object template predefined for a specific type of object, where the 3D object template has both shape and texture features; and deforming the received 3D object template based on the color images and the depth image, and performing texture synthesis on the color images and the deformed 3D object template to generate a 3D object model.
The step of deforming the received 3D object template based on the color images and the depth image and of performing texture synthesis on the color images and the deformed 3D object template may comprise: extracting, from the color images, feature information used to determine the shape of the object; separating the foreground image of interest by producing a foreground mask image from the frontal color image and the frontal depth image; performing super-resolution processing to produce a dense depth map corresponding to the frontal color image; deforming the 3D object template based on the extracted feature information and the produced dense depth map; and performing texture synthesis on the color images and the deformed 3D object template to generate the 3D object model.
Description of the Drawings
The above and/or other objects and advantages of the present invention will become apparent from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a block diagram of 3D object modeling equipment according to an exemplary embodiment of the present invention;
Fig. 2 shows a device for capturing the color images and the depth image according to an exemplary embodiment of the present invention;
Fig. 3 shows examples of color images and a depth image captured with the device shown in Fig. 2;
Fig. 4 shows an example of a 3D object template according to an exemplary embodiment of the present invention;
Fig. 5 shows an exemplary detailed structure of the object modeling unit in the 3D object modeling equipment shown in Fig. 1;
Fig. 6 shows the feature information extracted by the feature extraction module in the object modeling unit according to an exemplary embodiment of the present invention;
Fig. 7 is an operational flowchart of the foreground separation module in the object modeling unit according to an exemplary embodiment of the present invention;
Fig. 8 is an operational flowchart of the super-resolution module in the object modeling unit according to an exemplary embodiment of the present invention;
Fig. 9 is an operational flowchart of the deformation module in the object modeling unit according to an exemplary embodiment of the present invention;
Fig. 10 is an operational flowchart of the texture generation module in the object modeling unit according to an exemplary embodiment of the present invention;
Fig. 11 shows an example of the result of 3D object modeling according to an exemplary embodiment of the present invention; and
Fig. 12 shows an example in which several units of the 3D object modeling equipment shown in Fig. 1 are combined to produce an omnidirectional 3D object model.
Embodiments
Reference will now be made in detail to the embodiments of the present invention, examples of which are shown in the accompanying drawings, where like reference numerals refer to like parts throughout. The embodiments are described below with reference to the drawings in order to explain the present invention.
Fig. 1 is a block diagram of 3D object modeling equipment according to an exemplary embodiment of the present invention. As shown in Fig. 1, the 3D object modeling equipment comprises: an image receiving unit 100 for receiving captured color images and a depth image, where the color images comprise a frontal color image and at least two lateral color images taken from other view angles, and the depth image is a frontal depth image; a 3D template providing unit 200 for providing a 3D object template predefined for a specific type of object, where the 3D object template has both shape and texture features; and an object modeling unit 300 for deforming the 3D object template provided by the 3D template providing unit 200 based on the color images and the depth image received by the image receiving unit 100, and for performing texture synthesis on the color images and the deformed 3D object template to generate a 3D object model.
According to an exemplary embodiment of the present invention, the input images used for 3D object modeling can include multi-view color images and a depth image; for example, the color images can comprise a frontal color image and at least two lateral color images taken from other view angles, and the depth image is a frontal depth image. As an example, the capture device shown in Fig. 2 can be used to obtain the color images and the depth image. As shown in Fig. 2, three cameras with color sensors can be arranged directly in front of, to the left of and to the right of the captured object, to capture one frontal image and two lateral images of the object; a depth sensor is also arranged at the frontal camera to capture the frontal depth image of the object. Preferably, the combined coverage of the three cameras exceeds a 180-degree view angle. Fig. 3 shows examples of color images and a depth image captured with the device shown in Fig. 2; specifically, the upper part of Fig. 3 shows the frontal, left and right color images captured by the device of Fig. 2, and the lower part of Fig. 3 shows the captured frontal depth image.
According to an exemplary embodiment of the present invention, the 3D object template is predefined for a specific type of object (for example, a human face). Fig. 4 shows an example of a 3D object template predefined for a human face according to an exemplary embodiment of the present invention. As shown in Fig. 4, the 3D object template comprises both a shape and a texture; the upper part of Fig. 4 shows the template shape seen from different view angles, and the lower part of Fig. 4 shows the texture of the template (left) and the textured template (right).
Based on the above general concept of the present invention, the object modeling unit can be built in various concrete ways. An exemplary detailed structure of the object modeling unit 300 in the 3D object modeling equipment shown in Fig. 1 is described below with reference to Fig. 5. The object modeling unit 300 comprises: a feature extraction module 10 for extracting, from the color images, feature information used to determine the shape of the object; a foreground separation module 20 for separating the foreground image of interest by producing a foreground mask image from the frontal color image and the frontal depth image; a super-resolution module 30 for performing super-resolution processing to produce a dense depth map corresponding to the frontal color image; a deformation module 40 for deforming the 3D object template provided by the 3D template providing unit 200 based on the feature information extracted by the feature extraction module 10 and the dense depth map produced by the super-resolution module 30; and a texture generation module 50 for performing texture synthesis on the color images and the deformed 3D object template to generate the 3D object model.
Below, taking a human face as the example object, how the object modeling unit in the 3D object modeling equipment according to an exemplary embodiment of the present invention is realized is described with reference to Figs. 6 to 11.
Specifically, the feature extraction module 10 in the object modeling unit 300 extracts, from the color images, feature information used to determine the shape of the object. Here, the feature extraction module 10 can use various prior-art feature extraction techniques to extract, from the captured color images, feature information usable for determining the object's shape. As shown in Fig. 6, the feature information extracted by the feature extraction module 10 can include key points on the frontal color image (such as the eyebrows, eyes, nose, mouth and boundary) and the object contour on the lateral color images. Those skilled in the art will appreciate that the concrete feature information to be extracted differs for different types of objects, and that various extraction algorithms are applicable to the present invention, which is not limited to human faces.
The concrete operation of the foreground separation module 20 in the object modeling unit 300 is described below with reference to Fig. 7. As shown in Fig. 7, in step S21, the foreground separation module 20 performs foreground segmentation on the frontal depth image, for example by applying the existing K-means clustering algorithm to the frontal depth image to obtain the foreground mask on the frontal depth image. Then, in step S22, the foreground separation module 20 uses known calibration parameters (which depend on the relative positions and orientations of the cameras and on the projection properties of each camera itself, and which those skilled in the art can obtain in various ways) to project the 3D points corresponding to the depth pixels of the foreground mask onto the frontal color image plane, forming a sparse set of 2D points on the frontal color image. Then, in step S23, the gaps between the sparse 2D points formed on the frontal color image are filled, to form the foreground mask on the frontal color image; as an example, a prior-art hole filling technique can be used, such as morphological closing (morphological dilation followed by morphological erosion). Then, as an optional step S24, a prior-art technique (for example, image matting) can be used to refine the boundary of the foreground mask obtained in step S23. Note that if the precision of the foreground mask obtained in step S23 is sufficient, step S24 can be omitted entirely.
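As an illustration of steps S21 and S23, the sketch below segments the foreground with a 1-D two-means split of the depth values and closes small gaps with a pure-NumPy morphological closing. It is a hypothetical minimal version: the patent does not fix the clustering details or the structuring element, and both choices here are assumptions.

```python
import numpy as np

def foreground_mask(depth, iters=10):
    """Split a depth image into foreground/background with 1-D 2-means
    clustering on the valid depth values (the nearer cluster = foreground)."""
    d = depth[depth > 0]                       # ignore invalid (zero) depth
    c = np.array([d.min(), d.max()], float)    # initial cluster centers
    for _ in range(iters):
        assign = np.abs(d[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                c[k] = d[assign == k].mean()
    thresh = c.mean()                          # boundary between the centers
    return (depth > 0) & (depth < thresh)      # nearer-than-threshold pixels

def close_mask(mask, r=1):
    """Morphological closing (dilation then erosion) with a (2r+1)^2 window,
    used to fill small gaps in the projected sparse mask."""
    def shift_or(m):
        out = np.zeros_like(m)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out |= np.roll(np.roll(m, dy, 0), dx, 1)
        return out
    dil = shift_or(mask)          # dilation
    return ~shift_or(~dil)        # erosion of the dilated mask
```

In practice the 2D projection of step S22 would sit between these two calls; the closing then fills the pixel-sized holes the sparse projection leaves behind.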
Because the pixel distribution of the frontal depth image itself is very sparse, the super-resolution module 30 performs super-resolution processing (that is, up-sampling) on the frontal depth image to produce a dense depth map corresponding to the frontal color image (or to its foreground part). The concrete operation of the super-resolution module 30 in the object modeling unit 300 is described below with reference to Fig. 8. As shown in Fig. 8, in step S31, the super-resolution module 30 performs 3D culling on the depth pixels projected onto the frontal color image. Specifically, the super-resolution module 30 uses the known calibration parameters (which depend on the relative positions and orientations of the cameras and on the projection properties of each camera itself, and which those skilled in the art can obtain in various ways) to project onto the frontal color image plane the 3D points corresponding either to all pixels of the frontal depth image (the foreground mask part separated by the foreground separation module 20 plus the remaining background part) or to a subset of pixels (those obtained by dilating, by a fixed constant, the foreground mask part separated by the foreground separation module 20), forming a sparse set of 2D points on the frontal color image. These projected 2D points carry corresponding depth values and are therefore called "seed points". Then, occlusion handling can be performed on the seed points based on the foreground mask and background parts separated by the foreground separation module 20: if a seed point mapped from a background point of the frontal depth image (i.e., a point outside the foreground mask) is surrounded by foreground seed points (i.e., points mapped from the foreground mask), that seed point is regarded as occluded and is discarded from the sparse 2D point set.
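The seed-point projection of step S31 can be sketched as the standard back-project/transform/reproject pipeline below. The intrinsic matrices `K_d`, `K_c` and the extrinsics `(R, t)` stand in for the "known calibration parameters"; their form and names are assumptions of this sketch.

```python
import numpy as np

def project_seeds(depth, K_d, K_c, R, t):
    """Back-project every valid depth pixel to 3D with the depth camera's
    intrinsics K_d, move it into the color camera's frame with (R, t), and
    project it with the color intrinsics K_c.  Returns (u, v, z): seed-point
    coordinates in the color image plane with their depth values."""
    v_idx, u_idx = np.nonzero(depth > 0)
    z = depth[v_idx, u_idx]
    # back-project: X = z * K_d^{-1} [u, v, 1]^T
    pts = np.linalg.inv(K_d) @ np.vstack([u_idx, v_idx, np.ones_like(z)])
    pts = pts * z
    # rigid transform into the color camera frame
    pts_c = R @ pts + t[:, None]
    # perspective projection with the color intrinsics
    proj = K_c @ pts_c
    return proj[0] / proj[2], proj[1] / proj[2], pts_c[2]
```

With identical intrinsics and an identity extrinsic transform, each depth pixel projects back onto its own coordinates, which gives a quick sanity check of the chain.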
Then, in step S32, the super-resolution module 30 performs local interpolation based on the seed points obtained in step S31 and the frontal color image (that is, it interpolates a depth value for every pixel), obtaining a rough dense depth map. The local interpolation can also be restricted to the foreground mask part separated by the foreground separation module 20. In general, the local interpolation rests on two assumptions: (a) the surface of the object is piecewise smooth; and (b) pixels with similar colors within a local region are likely to have the same depth value. The exemplary embodiment of the present invention therefore adopts an improved interpolation operation based on a bilateral filter, which adaptively adjusts the weight of a seed point's depth value relative to the depth values of the surrounding pixels. Specifically, the depth value of an interpolated pixel in the frontal color image can be determined through equations (1) to (3):
d_p = Σ_{q∈N(p)} W_pq × d_q / Σ_{q∈N(p)} W_pq    (1)
W_pq = exp((1 − α(ΔΩ)) × color_diff − spatial_diff)    (2)
α(ΔΩ) = 1 / (1 + e^{−ε×(ΔΩ−τ)})    (3)
In the above equations, p denotes any interpolated pixel in the frontal color image; N(p) denotes the neighborhood of pixel p (for example, a rectangular or circular region of predetermined size centered on pixel p); and q denotes any seed point in the neighborhood N(p). d_p denotes the depth value of pixel p, d_q denotes the depth value of seed point q, and W_pq denotes the depth weight of seed point q with respect to pixel p. ΔΩ denotes the difference between the maximum and minimum depth values in the neighborhood N(p); color_diff and spatial_diff denote, respectively, the color similarity and the spatial proximity between seed point q and the target pixel p. ε and τ are positive constants determined by experiment, whose values the user can adjust according to different design requirements. Note that pixel p and its neighborhood N(p) can be restricted here to the foreground mask part separated by the foreground separation module 20. Through the above processing, the super-resolution module 30 obtains a rough dense depth map.
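A literal reading of equations (1)-(3) can be implemented as below. The concrete color-similarity score (1 / (1 + L2 distance)) is an assumption of this sketch, since the text leaves the exact form of color_diff and spatial_diff unspecified.

```python
import numpy as np

def alpha(d_omega, eps, tau):
    """Eq. (3): logistic gate driven by the depth range ΔΩ within N(p)."""
    return 1.0 / (1.0 + np.exp(-eps * (d_omega - tau)))

def interp_depth(p, seeds, color, eps=1.0, tau=0.5):
    """Eqs. (1)-(2): depth at pixel p as a weighted mean of seed depths.

    p      : (row, col) of the interpolated pixel
    seeds  : list of ((row, col), depth) pairs inside the neighborhood N(p)
    color  : H x W x 3 float image
    """
    depths = np.array([d for _, d in seeds])
    a = alpha(depths.max() - depths.min(), eps, tau)      # α(ΔΩ)
    num = den = 0.0
    for q, d_q in seeds:
        # assumed similarity score: 1 for identical colors, -> 0 when far
        color_sim = 1.0 / (1.0 + np.linalg.norm(color[p] - color[q]))
        spatial_dist = np.hypot(p[0] - q[0], p[1] - q[1])
        w = np.exp((1.0 - a) * color_sim - spatial_dist)  # Eq. (2)
        num += w * d_q
        den += w
    return num / den                                      # Eq. (1)
```

With two equidistant seeds, the one whose color matches pixel p pulls the interpolated depth toward its own value, which is the adaptive behavior assumption (b) calls for.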
Then, to further improve modeling precision, in optional step S33 the super-resolution module 30 can post-process the rough dense depth map obtained in step S32. As an example, commonly adopted post-processing includes noise reduction and hole filling: for noise reduction, median filtering can be used; for hole filling, an interpolation algorithm based on a bilateral filter similar to that of step S32 can be applied, directed at all points in the neighborhood rather than only the seed points.
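The median-filtering noise reduction mentioned for step S33 might look like this minimal sketch: a plain 3x3 median with edge pixels left untouched (window size and border handling are assumptions).

```python
import numpy as np

def median3(depth):
    """3x3 median filter; isolated depth spikes are replaced by the
    median of their neighborhood, while edge pixels are kept as-is."""
    out = depth.copy()
    h, w = depth.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(depth[y - 1:y + 2, x - 1:x + 2])
    return out
```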
Next, in step S34, the super-resolution module 30 performs re-clustering on the rough dense depth map obtained in step S32 (or on the post-processed dense depth map obtained in step S33); for example, a K-means clustering algorithm similar to that of step S21 can be used to obtain a more accurate foreground mask for the subsequent processing.
Then, in step S35, the super-resolution module 30 quantizes the depth map obtained in step S32 or S33 into discrete values, producing a discrete depth map. Various prior-art quantization techniques can be adopted for this purpose. Preferably, quantization can be applied only to the part of the depth map corresponding to the foreground mask obtained in step S34, which improves processing efficiency.
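Under the simplest assumption of uniform binning over the observed depth range, the quantization of step S35 could be:

```python
import numpy as np

def quantize_depth(depth, levels):
    """Map continuous depth values to `levels` evenly spaced discrete bins
    over [min, max]; returns (integer label map, bin centers)."""
    lo, hi = depth.min(), depth.max()
    edges = np.linspace(lo, hi, levels + 1)
    labels = np.clip(np.digitize(depth, edges) - 1, 0, levels - 1)
    centers = (edges[:-1] + edges[1:]) / 2.0
    return labels, centers
```

The label map is what the global optimization of step S37 operates on; the bin centers are needed later to turn optimized labels back into metric depth.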
In step S37, the super-resolution module 30 performs global optimization on the discrete depth map obtained in step S35 to improve its smoothness. As an example, the optimization result can be obtained by finding the solution (i.e., the set of pixels p) that minimizes the following objective function E:
E = Σ_p (E_data + E_smooth)    (4)
where E_data is the data cost, a measure of the consistency between the final (whole or foreground) depth map and the input (whole or foreground) depth map, and E_smooth is the smoothness cost, a measure of the consistency of depth within a neighborhood. Various prior-art algorithms for computing the data cost (for example, using the distance between the projections of the final depth map and of the input depth map on the frontal image as the data cost) and for computing the smoothness cost can be used to obtain E_data and E_smooth. Unlike the local optimization adopted in the prior art, the present invention performs global optimization on the discrete depth map by minimizing the smoothness cost while keeping the projection in the frontal view direction accurate, thereby significantly improving smoothness. It should be understood that various existing optimizers can be used to solve the above objective function E; as an example, considering both effectiveness and accuracy, a hierarchical Belief Propagation (BP) optimizer can be adopted.
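The objective of equation (4) can be evaluated and minimized as sketched below, assuming the data cost is supplied as a per-pixel, per-label table and the smoothness cost is an absolute label difference over 4-neighbors. Iterated conditional modes (ICM) is used here only as a compact stand-in for the hierarchical belief propagation the text suggests; it is not the patent's optimizer.

```python
import numpy as np

def energy(labels, data_cost, lam=1.0):
    """E = Σ_p (E_data + E_smooth): per-pixel data cost plus an
    |label difference| smoothness term over right/down neighbor pairs."""
    h, w, _ = data_cost.shape
    e = data_cost[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    e += lam * np.abs(np.diff(labels, axis=0)).sum()
    e += lam * np.abs(np.diff(labels, axis=1)).sum()
    return float(e)

def icm(labels, data_cost, lam=1.0, sweeps=5):
    """Greedy per-pixel relabeling: each pixel takes the label that
    minimizes its local data + smoothness cost, repeated for `sweeps`."""
    h, w, L = data_cost.shape
    labels = labels.copy()
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                best, best_e = labels[y, x], None
                for l in range(L):
                    e = data_cost[y, x, l]
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w:
                            e += lam * abs(l - labels[ny, nx])
                    if best_e is None or e < best_e:
                        best, best_e = l, e
                labels[y, x] = best
    return labels
```

On a toy problem where the data cost favors one label everywhere, a single noisy pixel is smoothed away and the total energy drops, which is exactly the behavior the global optimization step is after.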
Finally, at step S38, the super-resolution processing module 30 performs refinement on the global optimization result obtained at step S37, for example, converting the discrete depth values into floating-point values through an existing quadratic curve fitting technique.
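One common form of such quadratic-curve refinement fits a parabola through the cost of the winning discrete label and its two neighbors and takes the parabola's minimum as the floating-point depth; the sketch below assumes that form (the patent does not specify the exact fitting):

```python
def refine_depth(label, costs):
    """Refine a discrete depth label to a floating-point value by
    fitting a parabola through the matching costs of the winning
    label and its two neighbours (one common sub-level refinement;
    the patent only says 'quadratic curve fitting').
    """
    c0, cm, cp = costs[label], costs[label - 1], costs[label + 1]
    denom = cm - 2.0 * c0 + cp
    if denom == 0:          # degenerate: costs are collinear
        return float(label)
    return label + (cm - cp) / (2.0 * denom)

# Symmetric costs: the parabola's minimum sits exactly at label 1.
assert refine_depth(1, [4.0, 1.0, 4.0]) == 1.0
```

With asymmetric costs such as `[4.0, 1.0, 2.0]`, the minimum shifts toward the cheaper neighbor, yielding a sub-level depth of 1.25.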
After the super-resolution processing module 30 performs super-resolution processing to produce the dense depth map corresponding to the frontal color image, the deformation processing module 40 performs deformation processing on the 3D object template provided by the 3D template providing unit 200, based on the feature information extracted by the feature extraction module 10 and the dense depth map produced by the super-resolution processing module 30, using known calibration parameters (said calibration parameters depend on the relative positions and orientations of the shooting cameras and on the projection properties of each camera itself; those skilled in the art can obtain them in a variety of ways). The concrete operations of the deformation processing module 40 in the object modeling unit 300 according to an exemplary embodiment of the present invention are described below with reference to Fig. 9.
As shown in Fig. 9, at step S410, the deformation processing module 40 performs a global transformation on the 3D object template (provided by the 3D template providing unit 200) based on the correspondence between certain key points of the template and of the dense depth map (produced by the super-resolution processing module 30), so that the template approximately matches the dense depth map. As an example, three zoom factors can first be determined empirically: the horizontal zoom factor is based on the distance between the two eye centers, the vertical zoom factor is based on the vertical distance between the eyes and the mouth, and the zoom factor along the front-back direction is the mean of those two distances. Then, an existing 3D registration algorithm can be used to estimate the similarity transformation, where part of the feature information shown in Fig. 6 can serve as the input data of the 3D registration algorithm.
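The empirical zoom factors of step S410 can be sketched as follows. The landmark arguments and function layout are hypothetical stand-ins for the key-point correspondences the patent describes, and the remainder of the similarity transformation (rotation and translation from a 3D registration algorithm) is omitted:

```python
import numpy as np

def global_transform(verts, tpl_eyes, tpl_mouth, img_eyes, img_mouth):
    """Scale template vertices so key distances match the dense
    depth data (step S410 sketch). Axes: x horizontal, y vertical,
    z front-back. Each `*_eyes` is a (2, 3) array of eye centers;
    each `*_mouth` is a (3,) mouth center.
    """
    def eye_dist(eyes):            # distance between the eye centers
        return np.linalg.norm(eyes[0] - eyes[1])
    def mouth_dist(eyes, mouth):   # vertical eye-to-mouth distance
        return abs(eyes[:, 1].mean() - mouth[1])
    sx = eye_dist(img_eyes) / eye_dist(tpl_eyes)
    sy = mouth_dist(img_eyes, img_mouth) / mouth_dist(tpl_eyes, tpl_mouth)
    sz = 0.5 * (sx + sy)           # front-back factor: mean of the two
    return verts * np.array([sx, sy, sz])

tpl_eyes = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
tpl_mouth = np.array([0.0, -2.0, 0.0])
img_eyes = np.array([[-2.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
img_mouth = np.array([0.0, -2.0, 0.0])
out = global_transform(np.array([[1.0, 1.0, 1.0]]),
                       tpl_eyes, tpl_mouth, img_eyes, img_mouth)
```

Here the observed face is twice as wide as the template but equally tall, so the vertex is stretched horizontally by 2, kept vertically, and scaled by the mean factor 1.5 in depth.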
Then, at step S420, the deformation processing module 40 performs frontal deformation on the 3D object template that underwent the global transformation at step S410. Specifically, the deformation processing module 40 can perform a 2D data interpolation operation on the template based on the feature information extracted from the frontal color image by the feature extraction module 10 (the feature points on the frontal color image shown in Fig. 6). Those skilled in the art will appreciate that various prior-art data interpolation schemes are applicable to the present invention. As an example, an interpolation technique based on radial basis functions (RBF) can be adopted for the 2D data interpolation. At this point, the corresponding depth values in the 3D object template remain unchanged.
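The RBF-based interpolation of step S420 can be sketched with a Gaussian kernel; the kernel choice and width are assumptions, the point being only that solving the interpolation system makes the interpolant reproduce the known feature-point values exactly at the feature points:

```python
import numpy as np

def rbf_fit(centers, values, eps=1.0):
    """Fit radial-basis-function weights (Gaussian kernel) so the
    interpolant reproduces `values` at `centers`, the kind of 2D
    data interpolation step S420 applies to feature-point
    displacements (a minimal sketch, not the patent's exact kernel).
    """
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    phi = np.exp(-(eps * r) ** 2)        # Gaussian radial basis
    return np.linalg.solve(phi, values)  # interpolation weights

def rbf_eval(x, centers, w, eps=1.0):
    """Evaluate the fitted RBF interpolant at query points `x`."""
    r = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(eps * r) ** 2) @ w

centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([0.0, 1.0, 2.0])
w = rbf_fit(centers, values)
recon = rbf_eval(centers, centers, w)   # should equal `values`
```

Evaluating `rbf_eval` on the remaining template vertices then spreads the feature-point displacements smoothly across the whole mesh.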
At step S430, the deformation processing module 40 updates the 3D coordinates of the frontal boundary points (that is, the outer boundary of the face at the cheeks and chin) in the 3D object template output by step S420. Specifically, the deformation processing module 40 can update the frontal boundary points based on the feature information extracted from the lateral color images by the feature extraction module 10 (the contours on the lateral color images shown in Fig. 6). For example, the frontal boundary points are deformed along the orthographic projection direction (that is, their orthographic projections are preserved), and the moving distance is computed from the two nearest self-occluding model vertices (that is, the distance from each such vertex to the image contour, for example the image contours in the two lateral color images shown in Fig. 6, is computed, and the displacement of the frontal boundary point is determined as a weighted mean, with weights inversely proportional to the distance between the frontal boundary point and the self-occluding model vertices).
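The weighted-mean displacement rule of step S430 can be sketched as follows; for simplicity the sketch weights over all supplied self-occluding vertices rather than only the two nearest:

```python
import numpy as np

def boundary_shift(bpt, occ_pts, occ_to_contour):
    """Displacement for one frontal boundary point (step S430
    sketch): weighted mean of the self-occluding vertices'
    vertex-to-contour distances, with weights inversely
    proportional to the boundary-point-to-vertex distance.
    """
    d = np.linalg.norm(occ_pts - bpt, axis=1)
    w = 1.0 / np.maximum(d, 1e-9)        # inverse-distance weights
    return float(np.sum(w * occ_to_contour) / np.sum(w))

occ = np.array([[1.0, 0.0], [3.0, 0.0]])  # self-occluding vertices
shifts = np.array([2.0, 6.0])             # their contour distances
# Boundary point at the origin: weights 1 and 1/3, so the nearer
# vertex dominates the averaged displacement.
s = boundary_shift(np.array([0.0, 0.0]), occ, shifts)
```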
Then, at step S440, the deformation processing module 40 performs an interpolation operation on the depth values of all points in the 3D object template that underwent frontal deformation at step S420. For example, the deformation processing module 40 interpolates the depth values of all points in the template based on the depth values of the frontal boundary points obtained at step S430. Those skilled in the art will appreciate that various prior-art data interpolation schemes are applicable to the present invention. As an example, an RBF-based interpolation technique can again be adopted to interpolate the depth values.
At step S450, the deformation processing module 40 performs a silhouette vertex update on the 3D object template after the depth interpolation of step S440. Specifically, the 3D coordinates of the self-occluding model vertices (that is, the visible projected silhouette of the 3D object template output by step S440) are updated based on the image contours on the two sides. For example, each self-occluding model vertex can be deformed along its normal direction until it intersects the image contour.
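The normal-direction deformation of step S450 can be sketched as a march along the vertex normal until the projected point crosses the image contour; the `inside` predicate, step size, and iteration cap are hypothetical:

```python
import numpy as np

def snap_to_contour(vertex, normal, inside, step=0.1, max_steps=100):
    """Move a self-occluding vertex along its normal until its
    projection crosses the image contour (step S450 sketch).
    `inside` is a hypothetical predicate telling whether a 2D
    projection lies inside the contour; we march until it flips.
    """
    p = np.asarray(vertex, dtype=float)
    n = np.asarray(normal, dtype=float)
    start = inside(p)
    for _ in range(max_steps):
        p = p + step * n
        if inside(p) != start:   # crossed the contour
            return p
    return p                     # gave up: return last position

# Toy contour: circle of radius 1 around the origin.
inside = lambda p: p[0] ** 2 + p[1] ** 2 < 1.0
hit = snap_to_contour([0.0, 0.0], [1.0, 0.0], inside)
```

A production version would march in sub-pixel steps or bisect the final interval rather than stop at the first crossing.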
Finally, at step S460, the deformation processing module 40 performs a 3D interpolation operation on the 3D object template output by step S440, based on the depth data, the frontal boundary vertices updated at step S430, and the silhouette vertices updated at step S450. Those skilled in the art will appreciate that various prior-art data interpolation schemes are applicable to the present invention. As an example, an RBF-based interpolation technique can again be adopted for the 3D interpolation operation.
In fact, the frontal boundary is very close to the self-occluding model vertices, particularly when the 3D object template is not very dense. Therefore, as an alternative, step S450 can be omitted; in that case, step S460 performs the 3D interpolation operation on the 3D object template output by step S440 based only on the depth data and the frontal boundary vertices updated at step S430.
After the deformation processing module 40 performs deformation processing on the provided 3D object template based on the feature information and the dense depth map as described above, the texture generation module 50 can use known calibration parameters (said calibration parameters depend on the relative positions and orientations of the shooting cameras and on the projection properties of each camera itself; those skilled in the art can obtain them in a variety of ways) to perform texture synthesis on the color images received by the image receiving unit 100 and the deformed 3D object template, so as to generate the 3D object model. The concrete operations of the texture generation module 50 in the object modeling unit 300 according to an exemplary embodiment of the present invention are described below with reference to Fig. 10.
As shown in Fig. 10, at step S510, the texture generation module 50 uses an existing color conversion mechanism to perform color calibration on the deformed 3D object template and the lateral color images, so that they are consistent in tone with the frontal color image.
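A simple instance of such a color conversion mechanism is per-channel mean and standard-deviation matching; the patent does not name a specific mechanism, so this is only one plausible choice:

```python
import numpy as np

def match_tone(src, ref):
    """Shift and scale each channel of `src` so its mean and
    standard deviation match `ref`, one common color-transfer
    mechanism for the tone calibration of step S510 (the patent
    only says an 'existing color conversion mechanism').
    """
    src = src.astype(np.float64)
    ref = ref.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        scale = r_sd / s_sd if s_sd > 0 else 1.0
        out[..., c] = (src[..., c] - s_mu) * scale + r_mu
    return out

side = np.array([[[10.0, 0.0, 0.0], [30.0, 0.0, 0.0]]])   # lateral view
front = np.array([[[100.0, 0.0, 0.0], [120.0, 0.0, 0.0]]])  # frontal view
calibrated = match_tone(side, front)
```

Statistics would typically be computed over the overlapping face region only, and in a perceptual color space, rather than over whole images as here.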
Then, at step S520, the texture generation module 50 performs texture mapping on the color images. For example, the texture generation module 50 can use a simple rasterization algorithm to map the frontal color image and the color-calibrated lateral color images onto the texture coordinate plane based on texture coordinates and image positions.
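The mapping of step S520 can be sketched in a degenerate point-splatting form; a real rasterization algorithm would fill whole triangles between texture coordinates, which is omitted here:

```python
import numpy as np

def map_to_texture(image, tex_uv, img_xy, tex_size):
    """Sketch of step S520: for each known correspondence
    (texture coordinate -> image position), copy the image color
    into the texture plane. Real rasterization would interpolate
    across triangles; here we splat the sample points only.
    """
    tex = np.zeros((tex_size, tex_size, image.shape[2]), image.dtype)
    for (u, v), (x, y) in zip(tex_uv, img_xy):
        tu = min(int(u * (tex_size - 1)), tex_size - 1)
        tv = min(int(v * (tex_size - 1)), tex_size - 1)
        tex[tv, tu] = image[y, x]
    return tex

img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)  # tiny 2x2 RGB image
tex = map_to_texture(img, [(0.0, 0.0), (1.0, 1.0)], [(0, 0), (1, 1)], 4)
```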
At step S530, the texture generation module 50 merges the textures of the two lateral color images with the texture of the frontal color image; then, at step S540, the texture generation module 50 merges the template texture that was color-calibrated at step S510 with the texture generated at step S530, thereby outputting the final 3D object model. The texture-merging steps S530 and S540 can also be combined into a single step, and an existing Poisson editing technique can be used in the merging.
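The Poisson editing used in the merging can be illustrated in one dimension: the merged region keeps the source texture's gradients while matching the target at the boundary, so no visible seam appears. This toy solver assumes Dirichlet boundaries and a direct linear solve:

```python
import numpy as np

def poisson_blend_1d(target, source, i0, i1):
    """Gradient-domain (Poisson-editing) merge in 1-D: pixels
    i0..i1-1 take the gradients of `source` while matching `target`
    at the region boundary, so the seam between merged textures
    (steps S530/S540) stays invisible. A toy sketch of the 2-D
    Poisson editing the patent cites.
    """
    out = target.astype(np.float64).copy()
    n = i1 - i0
    # Discrete Laplacian system A x = b with Dirichlet boundary
    # values taken from `target`.
    A = np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) \
        + np.diag(np.ones(n - 1), -1)
    lap = source[i0 + 1:i1 + 1] - 2.0 * source[i0:i1] \
        + source[i0 - 1:i1 - 1]
    b = lap.astype(np.float64).copy()
    b[0] -= out[i0 - 1]
    b[-1] -= out[i1]
    out[i0:i1] = np.linalg.solve(A, b)
    return out

target = np.array([0.0, 0.0, 0.0, 0.0, 10.0, 10.0])
source = np.array([5.0, 6.0, 7.0, 8.0, 9.0, 10.0])  # linear ramp
blended = poisson_blend_1d(target, source, 2, 4)
```

Because the source is a linear ramp (zero Laplacian), the blended region bridges the boundary values 0 and 10 smoothly instead of copying the source's absolute intensities.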
The above 3D object modeling equipment and method according to exemplary embodiments of the present invention use multi-view color images and a depth image to perform 3D object modeling based on a predefined 3D object template. The fusion of the multi-view color images and the depth image helps to enhance the validity of the 3D object modeling, requires no manual intervention, can be applied in poorly controlled illumination environments, and handles the lack of texture well. In addition, the super-resolution processing adopts a global optimization not used in the prior art, thereby improving the smoothness of the modeling result. Moreover, the manner proposed by the present invention of fusing the multi-view color images to perform deformation processing on the predefined template requires neither a statistical model nor nonlinear optimization, thereby greatly improving processing efficiency. Fig. 11 illustrates an example result of 3D object modeling according to an exemplary embodiment of the present invention. As shown in Fig. 11, the top row shows the rendering result (left) of a face modeled according to the present invention and the produced texture image (right); the middle row shows the rendering result of the smooth shape; the bottom row shows the modeled points projected onto the three captured input images, from which it can be seen that the modeling result according to an exemplary embodiment of the present invention matches the input images well.
For a more complicated object to be modeled, the 3D object modeling equipment shown in Fig. 1 can be applied in combination; that is, by shooting views from multiple directions, 3D object modeling is performed separately in each direction on the multi-angle views, and the modeling results of all directions are then combined through shape registration to generate the final 3D object model. As shown in Fig. 12, taking multi-view shooting from the front and back directions as an example, a full 360-degree omnidirectional 3D model can be achieved.
It should be noted that the 3D object modeling method and equipment according to exemplary embodiments of the present invention can be applied to related three-dimensional computer graphics processing systems (including generating devices for animation or avatars, photo synthesizing devices, scene generating devices, or other special-effect generating devices). In addition to the 3D object modeling equipment according to an exemplary embodiment of the present invention, such a three-dimensional computer graphics processing system also includes a corresponding model processing unit and a rendering unit, where the model processing unit performs graphics processing on the generated 3D object model, and the rendering unit renders the processing result to output the synthesized graphics result. Since these units belong to the prior art beyond the present invention, they are not elaborated here, so as to avoid obscuring the subject matter of the present invention.
The above embodiments of the present invention are merely exemplary, and the present invention is not limited thereto. Those skilled in the art should understand that any approach that performs 3D object modeling based on a predefined 3D object template by fusing multi-view color images and a depth image falls within the scope of the present invention. These embodiments can be changed without departing from the principle and spirit of the present invention, the scope of which is defined in the claims and their equivalents.

Claims (12)

1. Equipment for 3D object modeling, said equipment comprising:
an image receiving unit for receiving captured color images and a depth image, wherein the color images comprise a frontal color image and lateral color images of at least two other viewing angles, and the depth image is a frontal depth image;
a 3D template providing unit for providing a 3D object template predefined for a specific type of object, wherein said 3D object template has shape features and texture features; and
an object modeling unit for performing deformation processing on the 3D object template provided by the 3D template providing unit, based on the color images and the depth image received by the image receiving unit, and performing texture synthesis on the color images and the deformed 3D object template to generate a 3D object model.
2. The equipment as claimed in claim 1, wherein said object modeling unit comprises:
a feature extraction module for extracting, from the color images, feature information used to determine the object shape;
a foreground separation module for separating the foreground image of interest by producing a foreground mask image from the frontal color image and the frontal depth image;
a super-resolution processing module for performing super-resolution processing to produce a dense depth map corresponding to the frontal color image;
a deformation processing module for performing deformation processing on the 3D object template provided by the 3D template providing unit, based on the feature information extracted by the feature extraction module and the dense depth map produced by the super-resolution processing module; and
a texture generation module for performing texture synthesis on the color images and the deformed 3D object template to generate the 3D object model.
3. The equipment as claimed in claim 2, wherein the foreground separation module performs foreground segmentation on the frontal depth image to obtain a foreground mask; then, using known calibration parameters, projects the depth pixels corresponding to the foreground mask onto the frontal color image plane to form a sparse 2D point set on the frontal color image; and fills the gaps between said sparse 2D points to form the corresponding foreground image on the frontal color image.
4. The equipment as claimed in claim 3, wherein the super-resolution processing module projects the pixels of the depth image onto the frontal color image plane to obtain, among the pixels of the frontal color image, seed points having depth values; performs a local interpolation operation based on the obtained seed points and the frontal color image, thereby producing a dense depth map corresponding to the frontal color image; quantizes said dense depth map; and performs global optimization on the quantized dense depth map to improve its smoothness.
5. The equipment as claimed in claim 4, wherein, when performing the local interpolation operation, the super-resolution processing module determines the depth value of an interpolated pixel in the frontal color image using the following equations:
d_p = Σ_{q∈N(p)} W_pq × d_q / Σ_{q∈N(p)} W_pq
W_pq = exp((1 − α(ΔΩ)) × color_diff − spatial_diff), wherein
α(ΔΩ) = 1/(1 + e^(−ε×(ΔΩ−τ)))
In the above equations, p denotes any interpolated pixel in the frontal color image, N(p) denotes the neighborhood of pixel p, q denotes any seed point in the neighborhood N(p), d_p denotes the depth value of pixel p, d_q denotes the depth value of seed point q, W_pq denotes the depth weight of seed point q with respect to pixel p, ΔΩ denotes the difference between the maximum depth value and the minimum depth value in the neighborhood N(p), color_diff and spatial_diff denote the color similarity and the spatial proximity, respectively, from seed point q to the interpolated pixel p, and ε and τ are positive constants determined through experiment.
6. The equipment as claimed in claim 5, wherein, when performing global optimization on the obtained discrete dense depth map, the super-resolution processing module obtains the optimized dense depth map by finding the set of pixels p that minimizes the objective function E below:
E = Σ_p(E_data + E_smooth)
wherein E_data is the data cost, a measure of the consistency between the final depth map and the input depth map, and E_smooth is the smoothness cost, a measure of the depth consistency within a neighborhood.
7. The equipment as claimed in claim 6, wherein, when performing the deformation processing, said deformation processing module sequentially performs a global transformation, a frontal deformation, a frontal boundary update, a depth interpolation, a silhouette vertex update, and a 3D interpolation operation on said 3D object template.
8. The equipment as claimed in claim 7, wherein, when performing the global transformation, the deformation processing module globally transforms said 3D object template based on the correspondence between certain key points of the 3D object template provided by the 3D template providing unit and of the dense depth map produced by the super-resolution processing module, so that the template approximately matches said optimized dense depth map;
when performing the frontal deformation, the deformation processing module performs a 2D data interpolation operation on the globally transformed 3D object template based on the feature information extracted from the frontal color image by the feature extraction module; and
when performing the frontal boundary update, the deformation processing module updates the frontal boundary of the frontally deformed 3D object template based on the feature information extracted from the lateral color images by the feature extraction module.
9. The equipment as claimed in claim 8, wherein the texture generation module uses a color conversion mechanism to perform color calibration on the lateral color images and the deformed 3D object template so that they are consistent in tone with the frontal color image; maps the frontal color image and the color-calibrated lateral color images onto the texture coordinate plane; merges the textures of the lateral color images with the texture of the frontal color image; and then merges the result with the texture of the color-calibrated 3D object template, thereby outputting the final 3D object model.
10. A three-dimensional computer graphics processing system, comprising:
the equipment for 3D object modeling as claimed in any one of claims 1 to 9, for producing a 3D object model;
a model processing unit for performing graphics processing on the produced 3D object model; and
a rendering unit for rendering the result of the processing to output a synthesized graphics result.
11. A method for 3D object modeling, said method comprising:
receiving captured color images and a depth image, wherein the color images comprise a frontal color image and lateral color images of at least two other viewing angles, and the depth image is a frontal depth image;
receiving a 3D object template predefined for a specific type of object, wherein said 3D object template has shape features and texture features; and
performing deformation processing on the received 3D object template based on the color images and the depth image, and performing texture synthesis on the color images and the deformed 3D object template to generate a 3D object model.
12. The method as claimed in claim 11, wherein the step of performing deformation processing on the received 3D object template based on the color images and the depth image, and performing texture synthesis on the color images and the deformed 3D object template, comprises:
extracting, from the color images, feature information used to determine the object shape;
separating the foreground image of interest by producing a foreground mask image from the frontal color image and the frontal depth image;
performing super-resolution processing to produce a dense depth map corresponding to the frontal color image;
performing deformation processing on the 3D object template based on the extracted feature information and the produced dense depth map; and
performing texture synthesis on the color images and the deformed 3D object template to generate the 3D object model.
CN2010106019130A 2010-12-10 2010-12-10 Equipment and method for modeling three-dimensional (3D) object Pending CN102542601A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010106019130A CN102542601A (en) 2010-12-10 2010-12-10 Equipment and method for modeling three-dimensional (3D) object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010106019130A CN102542601A (en) 2010-12-10 2010-12-10 Equipment and method for modeling three-dimensional (3D) object

Publications (1)

Publication Number Publication Date
CN102542601A true CN102542601A (en) 2012-07-04

Family

ID=46349412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010106019130A Pending CN102542601A (en) 2010-12-10 2010-12-10 Equipment and method for modeling three-dimensional (3D) object

Country Status (1)

Country Link
CN (1) CN102542601A (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842119A (en) * 2012-08-18 2012-12-26 湖南大学 Quick document image super-resolution method based on image matting and edge enhancement
CN103971408A (en) * 2014-05-21 2014-08-06 中国科学院苏州纳米技术与纳米仿生研究所 Three-dimensional facial model generating system and method
CN104333747A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Stereoscopic photographing method and stereoscopic photographing equipment
CN106105192A (en) * 2014-01-03 2016-11-09 英特尔公司 Rebuild by the real-time 3D of depth camera
CN106373185A (en) * 2016-08-29 2017-02-01 刘建国 Multi-perspective three-dimensional reconstruction method and device of removable historical relic
CN106462963A (en) * 2014-02-27 2017-02-22 因派克医药系统有限公司 System and method for auto-contouring in adaptive radiotherapy
CN104266587B (en) * 2014-09-22 2017-04-12 电子科技大学 Three-dimensional measurement system and method for obtaining actual 3D texture point cloud data
CN106575158A (en) * 2014-09-08 2017-04-19 英特尔公司 Environmentally mapped virtualization mechanism
CN106683134A (en) * 2017-01-25 2017-05-17 触景无限科技(北京)有限公司 Table lamp image calibration method and device
CN107067468A (en) * 2017-03-30 2017-08-18 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN107240110A (en) * 2017-06-05 2017-10-10 张洋 Projection mapping region automatic identifying method based on machine vision technique
CN107256207A (en) * 2012-09-26 2017-10-17 Sk 普兰尼特有限公司 Apparatus and method for generating 3D object
CN107465911A (en) * 2016-06-01 2017-12-12 东南大学 A kind of extraction of depth information method and device
CN110248085A (en) * 2018-03-06 2019-09-17 索尼公司 For the stabilized device and method of object bounds in the image of image sequence
CN110619200A (en) * 2018-06-19 2019-12-27 Oppo广东移动通信有限公司 Verification system and electronic device
CN111340949A (en) * 2020-05-21 2020-06-26 超参数科技(深圳)有限公司 Modeling method, computer device and storage medium for 3D virtual environment
WO2020173258A1 (en) * 2019-02-28 2020-09-03 清华大学 Image recognition system and method
CN112784621A (en) * 2019-10-22 2021-05-11 华为技术有限公司 Image display method and apparatus

Citations (2)

Publication number Priority date Publication date Assignee Title
JP2002032742A (en) * 2000-07-13 2002-01-31 Sony Corp System and method for three-dimensional image generation and program providing medium
CN1430183A (en) * 2001-11-27 2003-07-16 三星电子株式会社 Node structure for representing three-D object by depth image


Non-Patent Citations (4)

Title
DEREK CHAN等: "A Noise-Aware Filter for Real-Time Depth Upsampling", 《WORKSHOP ON MULTI-CAMERA AND MULTI-MODAL SENSOR FUSION ALGORITHMS AND APPLICATIONS》 *
XIAOWEN WANG等: "Morphable Face Reconstruction with Multiple Views", 《2010 2ND INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS (IHMSC)》 *
WANG Chengzhang et al.: "An improved 3D face modeling method based on a morphable model", Acta Automatica Sinica *
HUANG Fu et al.: "Research on realistic 3D face modeling based on multi-angle photos", Electronic Test *

Cited By (31)

Publication number Priority date Publication date Assignee Title
CN102842119A (en) * 2012-08-18 2012-12-26 湖南大学 Quick document image super-resolution method based on image matting and edge enhancement
CN107256207A (en) * 2012-09-26 2017-10-17 Sk 普兰尼特有限公司 Apparatus and method for generating 3D object
CN107256207B (en) * 2012-09-26 2020-09-29 Sk 普兰尼特有限公司 Apparatus and method for generating 3D object
US10178366B2 (en) 2014-01-03 2019-01-08 Intel Corporation Real-time 3D reconstruction with a depth camera
CN106105192A (en) * 2014-01-03 2016-11-09 英特尔公司 Rebuild by the real-time 3D of depth camera
CN106105192B (en) * 2014-01-03 2021-05-18 英特尔公司 Real-time 3D reconstruction by depth camera
CN106462963A (en) * 2014-02-27 2017-02-22 因派克医药系统有限公司 System and method for auto-contouring in adaptive radiotherapy
CN106462963B (en) * 2014-02-27 2019-05-03 医科达有限公司 System and method for being sketched outline automatically in adaptive radiation therapy
CN103971408B (en) * 2014-05-21 2017-05-03 中国科学院苏州纳米技术与纳米仿生研究所 Three-dimensional facial model generating system and method
CN103971408A (en) * 2014-05-21 2014-08-06 中国科学院苏州纳米技术与纳米仿生研究所 Three-dimensional facial model generating system and method
CN106575158A (en) * 2014-09-08 2017-04-19 英特尔公司 Environmentally mapped virtualization mechanism
CN106575158B (en) * 2014-09-08 2020-08-21 英特尔公司 Environment mapping virtualization mechanism
CN104266587B (en) * 2014-09-22 2017-04-12 电子科技大学 Three-dimensional measurement system and method for obtaining actual 3D texture point cloud data
CN104333747B (en) * 2014-11-28 2017-01-18 广东欧珀移动通信有限公司 Stereoscopic photographing method and stereoscopic photographing equipment
CN104333747A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Stereoscopic photographing method and stereoscopic photographing equipment
CN107465911A (en) * 2016-06-01 2017-12-12 东南大学 A kind of extraction of depth information method and device
CN107465911B (en) * 2016-06-01 2019-03-15 东南大学 A kind of extraction of depth information method and device
CN106373185A (en) * 2016-08-29 2017-02-01 刘建国 Multi-perspective three-dimensional reconstruction method and device of removable historical relic
CN106683134A (en) * 2017-01-25 2017-05-17 触景无限科技(北京)有限公司 Table lamp image calibration method and device
CN106683134B (en) * 2017-01-25 2019-12-31 触景无限科技(北京)有限公司 Image calibration method and device for desk lamp
CN107067468B (en) * 2017-03-30 2020-05-26 联想(北京)有限公司 Information processing method and electronic equipment
CN107067468A (en) * 2017-03-30 2017-08-18 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN107240110A (en) * 2017-06-05 2017-10-10 张洋 Projection mapping region automatic identifying method based on machine vision technique
CN110248085A (en) * 2018-03-06 2019-09-17 索尼公司 For the stabilized device and method of object bounds in the image of image sequence
CN110248085B (en) * 2018-03-06 2021-08-06 索尼公司 Apparatus and method for object boundary stabilization in images of an image sequence
CN110619200A (en) * 2018-06-19 2019-12-27 Oppo广东移动通信有限公司 Verification system and electronic device
WO2020173258A1 (en) * 2019-02-28 2020-09-03 清华大学 Image recognition system and method
CN112784621B (en) * 2019-10-22 2024-06-18 华为技术有限公司 Image display method and device
CN112784621A (en) * 2019-10-22 2021-05-11 华为技术有限公司 Image display method and apparatus
CN111340949A (en) * 2020-05-21 2020-06-26 超参数科技(深圳)有限公司 Modeling method, computer device and storage medium for 3D virtual environment
CN111340949B (en) * 2020-05-21 2020-09-18 超参数科技(深圳)有限公司 Modeling method, computer device and storage medium for 3D virtual environment

Similar Documents

Publication Publication Date Title
CN102542601A (en) Equipment and method for modeling three-dimensional (3D) object
CN105701857B (en) Texturing of 3D modeled objects
CN102592275B (en) Virtual viewpoint rendering method
EP1703470B1 (en) Depth image-based modeling method and apparatus
US8217931B2 (en) System and method for processing video images
KR100914845B1 (en) Method and apparatus for 3d reconstructing of object by using multi-view image information
US20130162643A1 (en) Physical Three-Dimensional Model Generation Apparatus
US20080259073A1 (en) System and method for processing video images
US20120032948A1 (en) System and method for processing video images for camera recreation
US20150178988A1 (en) Method and a system for generating a realistic 3d reconstruction model for an object or being
CN103345771A (en) Efficient image rendering method based on modeling
Quan Surface reconstruction by integrating 3d and 2d data of multiple views
CN113284173B (en) End-to-end scene flow and pose joint learning method based on false laser radar
CN112927348B (en) High-resolution human body three-dimensional reconstruction method based on multi-viewpoint RGBD camera
KR102422822B1 (en) Apparatus and method for synthesizing 3d face image using competitive learning
Tzovaras et al. 3D object articulation and motion estimation in model-based stereoscopic videoconference image sequence analysis and coding
CN103945206A (en) Three-dimensional picture synthesis system based on comparison between similar frames
CN114935316B (en) Standard depth image generation method based on optical tracking and monocular vision
Oisel et al. One-dimensional dense disparity estimation for three-dimensional reconstruction
Zhou et al. Live4D: A Real-time Capture System for Streamable Volumetric Video
CN110149508A (en) A kind of array of figure generation and complementing method based on one-dimensional integrated imaging system
Tan et al. FaceCollage: A rapidly deployable system for real-time head reconstruction for on-the-go 3D telepresence
Raviya et al. Real time depth data refurbishment in frequency domain and 3D modeling map using Microsoft kinect sensor
JP2009294956A (en) Three-dimensional shape restoration method, its device, program, and recording medium
Chang et al. Facial model adaptation from a monocular image sequence using a textured polygonal model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20170329

AD01 Patent right deemed abandoned