CN103530907B - Complicated three-dimensional model drawing method based on images - Google Patents

Publication number: CN103530907B
Application number: CN201310497271.8A
Authority: CN (China)
Other versions: CN103530907A (en)
Inventors: 向开兵, 郝爱民, 吴伟和, 李帅, 王德志
Assignees: SHENZHEN ESUN DISPLAY CO Ltd; Beihang University
Legal status: Active
Abstract

The invention provides an image-based rendering method for complex three-dimensional models. Vertices are uniformly selected on a sphere surrounding the model as camera positions, and with the sphere center as the camera target, a color image and a depth image of the model are captured at each sampling viewpoint. The sphere is triangulated according to the sampling-point coordinates to determine the triangle containing the virtual viewpoint; the viewpoints at the triangle's three vertices serve as reference viewpoints, and the model is rendered at the virtual viewpoint from their depth and color images: the camera parameters of the reference viewpoints are used to compute the pixel mappings between the virtual viewpoint and each of the three reference viewpoints, the virtual-viewpoint image is drawn from suitable reference-viewpoint pixels or background pixels with the depth images as a guide, and finally the rendered color image is optimized. The method meets real-time requirements and produces a highly lifelike rendering.

Description

Image-based rendering method for complex three-dimensional models
Technical field
The present invention relates to an image-based realistic rendering method, mainly used for the realistic rendering of complex models under virtual viewpoints.
Background technology
In traditional computer graphics, the general pipeline of realistic rendering is: the user supplies the object's geometry and builds a geometric model; then, from the lighting environment and the model's physical attributes such as smoothness, transparency, reflectance and refractive index, together with its surface texture, the color of each pixel of the object under a given viewpoint is computed through spatial and perspective transformations. However, the modeling process is complicated, the computation and display costs are high, and the time complexity is strongly coupled to model complexity, so the approach is unsuitable for complex models and rarely yields lifelike results.
Image-based rendering (IBR) takes images as its basic input and synthesizes the image at a virtual viewpoint without reconstructing a geometric model. It has broad application prospects in video games, virtual tours, e-commerce, industrial inspection and other fields, and is therefore a research hotspot in realistic 3D rendering. The present invention is an IBR method: an image-based rendering method for complex three-dimensional models.
The main image-based rendering methods are as follows:
1. Hybrid geometry- and image-based modeling and rendering (hybrid geometry and image-based approach)
The method proposed by Paul E. Debevec (Reference 1 - Paul E. Debevec, Camillo J. Taylor, Jitendra Malik. "Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach". In SIGGRAPH '96: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pages 11-20) proceeds in the following main steps:
A. Photograph the scene and interactively mark the model's edges;
B. Generate a rough model of the object;
C. Refine the model with a model-based stereo-vision algorithm;
D. Synthesize new views using view-dependent texture mapping.
An example of this process is shown in Fig. 5. The advantage of the method is that it is simple and fast, and new views can be obtained from a small number of photographs; the drawback is that the model's outline must be specified manually, so it is only applicable to scenes with regular outlines, such as ordinary buildings, and is unsuitable for complex models.
2. View interpolation and view transformation methods (view interpolation, view transformation)
View interpolation and transformation methods generate the image at a virtual viewpoint directly from the photographs at the reference points (Reference 2 - Geetha Ramachandran, Markus Rupp. "Multiview synthesis from stereo views". In IWSSIP 2012, 11-13 April 2012, pp. 341-345; Reference 3 - Ankit K. Jain, Lam C. Tran, Ramsin Khoshabeh, Truong Q. Nguyen. "Efficient stereo-to-multiview synthesis". ICASSP 2011, pp. 889-892; Reference 4 - S. C. Chan, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image-based rendering and synthesis", IEEE Signal Processing Magazine, vol. 24, no. 6, pp. 22-33). The input is a pair of rectified color images and depth images from two viewpoints; the output is the image at a virtual viewpoint lying on the straight line (the baseline) determined by the two reference points. The procedure is as follows:
A. Stereo matching: generate the initial synthesized multi-view image;
B. Optimization: locate possible hole points by edge detection;
C. Filling: generate the depth map corresponding to the synthesized viewpoint;
D. Image reconstruction: fill the holes of the color image according to the depth image.
An example of the method is shown in Fig. 6. Its advantages are a simple procedure and a high peak signal-to-noise ratio (PSNR) of the generated images (PSNR characterizes the similarity between the processed image and the original; the higher the PSNR, the higher the fidelity of the synthesized image), and holes can be filled well. However, the method can only produce images of virtual viewpoints on the baseline, and the rectification applied to the pictures introduces projection errors, so only approximate intermediate images can be generated.
Content of the invention
Existing realistic rendering methods for three-dimensional models have the following shortcomings. Geometry-based methods involve complicated model acquisition and reconstruction, their rendering cost depends heavily on model complexity and illumination properties, the results lack realism, and they are unsuitable for complex models. In existing image-based rendering techniques, the synthesized viewpoint is restricted to the baseline between two reference viewpoints, so an image at an arbitrary viewpoint cannot be generated.
To address these shortcomings of the prior art, the present invention proposes an image-based rendering method for complex three-dimensional models, comprising the following steps:
(1) Calibration of the virtual viewpoint: the sphere surrounding the model is triangulated according to the sampled camera-position coordinates; the triangular facet containing the virtual viewpoint is determined, the viewpoints at the facet's three vertices are taken as reference viewpoints, and the virtual viewpoint is expressed as a linear combination of the reference viewpoints;
(2) Computation and rendering: from the positions and camera parameters of the virtual viewpoint and the reference viewpoints, the mapping between each pixel coordinate in the three reference images and the pixel coordinates under the virtual viewpoint is computed. According to this mapping, each pixel of each reference color image is mapped into the virtual-viewpoint image, and its coordinates and depth value in that image are calculated; when pixels from several reference viewpoints map to the same position, the pixel with the smaller depth value is taken. All pixels of the virtual-viewpoint image filled from reference-viewpoint pixels are marked, producing a gray map that records how the reference viewpoints map into the virtual viewpoint;
(3) Image optimization: for the holes in the virtual-viewpoint image, i.e. positions to which step (2) mapped no reference pixel, edge-contour information is extracted from the gray map and a median filter is applied to the generated color image along the edge contours, filling each hole with the values of its neighboring pixels; the median filter simultaneously removes noise pixels.
Step (1) thus calibrates the virtual viewpoint and determines the reference viewpoints used for rendering; step (2) establishes the pixel mapping from the reference viewpoints to the virtual viewpoint and renders the complex model under the virtual viewpoint.
Steps (2) and (3) use CUDA (Compute Unified Device Architecture) parallel computation to accelerate rendering and optimization under the virtual viewpoint, meeting the requirement of real-time interaction.
The principle of the invention is as follows:
Vertices are uniformly selected on a sphere surrounding the model as camera positions, and with the sphere center as the camera target, a color image and a depth image of the model are captured at each sampling viewpoint. The sphere is triangulated according to the sample-point coordinates and the triangle containing the virtual viewpoint is determined; the viewpoints whose sample points are the triangle's vertices serve as reference viewpoints, and the model under the virtual viewpoint is rendered from their depth and color images: first, the parameters of the reference viewpoints are used to compute the pixel mappings between the virtual viewpoint and each of the three reference viewpoints; second, with the depth images as a guide, suitable reference-viewpoint pixels or background pixels are selected to draw the virtual-viewpoint image; finally, the rendered color image is optimized. CUDA acceleration is used throughout, enabling fast parallel image processing. The invention meets real-time requirements while producing a highly lifelike rendering.
Compared with the prior art, the invention has the following advantages:
(1) The virtual viewpoint can move freely in space. Rendering the model from three reference points both guarantees free movement of the virtual viewpoint in the horizontal and vertical dimensions and minimizes the input size and storage overhead of the algorithm;
(2) The proposed rendering method is highly stable; the time complexity of the algorithm is only weakly coupled to scene complexity, making it particularly suitable for rendering complex models.
Brief description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 is a schematic diagram of the principle of the algorithm;
Fig. 3 shows the positional relationship between the virtual viewpoint and the reference viewpoints when the virtual viewpoint lies inside a triangular facet;
Fig. 4 shows the forward transformation matrix and the inverse transformation matrix;
Fig. 5 shows the hybrid geometry-and-image modeling process, where (a) determines the outline of the reconstructed object interactively, (b) obtains a rough model of the object, (c) refines the model with a model-based stereo-vision algorithm, and (d) obtains the final scene by view-dependent texture mapping;
Fig. 6 is an example of the parallax rendering method: the initial input is a rectified color image (shown as its gray-scale rendition), the intermediate result is the color image obtained after pixel mapping and filling (note the holes in the boxes), and the final output is the color image after hole filling (note that the holes in the boxes have been eliminated);
Fig. 7 is the color image of the virtual viewpoint after the preliminary mapping and filling (shown as its gray-scale rendition); note the noise around the model and the holes inside it;
Fig. 8 is the gray map of the virtual viewpoint after the preliminary mapping and filling; note the noise around the model and the internal holes;
Fig. 9 is the edge map after the dilation and difference operations;
Fig. 10 is the result of simply median-filtering all pixels;
Fig. 11 is the result of median-filtering only the edge pixels; compared with Fig. 10, Fig. 11 preserves detail better and is more faithful;
Fig. 12 shows part of the input of the example: the top three pictures are input-image schematics (the inputs are color images, not shown), and the bottom three are the corresponding depth maps;
Fig. 13 shows part of the output of the example; the output pictures are synthesized by the proposed rendering method.
Specific embodiment
The present invention is further described below with reference to the drawings and a specific embodiment.
1. A realistic rendering method based on depth images
The three-dimensional model is represented by sampling points evenly distributed on a sphere surrounding the model, each comprising camera parameters and the depth map and color map under that camera viewpoint. The sphere is triangulated according to the sample-point coordinates, the triangle containing the virtual viewpoint is determined, and the viewpoints whose sample points are the triangle's vertices are taken as reference viewpoints. The model under the virtual viewpoint is rendered from the depth and color images of the reference viewpoints: first, the parameters of the reference viewpoints are used to compute the pixel mappings between the virtual viewpoint and the three reference viewpoints; second, with the depth images as a guide, suitable reference-viewpoint pixels or background pixels are selected to draw the virtual-viewpoint image; finally, the rendered color image is optimized. CUDA acceleration is used throughout, enabling fast parallel image processing. The specific implementation is as follows:
1. Virtual-viewpoint calibration module
In the invention, the object's three-dimensional model is represented by the depth images, color images and camera parameters of the sampling points. The model is written as a two-tuple m = <k, v>, where k is a simplicial complex describing the connectivity of the sampling points and v is the set of sampling points, v = (v_i | i = 1, 2, 3, ..., |v|), with |v| the number of sampling points. v_i = (c_i, d_i, p_i) is the i-th sampling point, where c_i and d_i are its color image and depth image, and p_i its camera parameters, p_i = (pc_i, po_i, asp_i, fov_i, zn_i, zf_i): pc_i is the camera position, po_i the camera target position, asp_i the aspect ratio of the camera's field of view, fov_i the angular range of the field of view, and zn_i, zf_i the minimum and maximum effective depths of the camera.
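The sampling-point representation described above can be sketched as a small data structure. This is an illustrative reading of the notation only, not code from the patent; all field and class names are assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class CameraParams:
    # p_i = (pc, po, asp, fov, zn, zf) in the text's notation
    pc: Tuple[float, float, float]   # camera position
    po: Tuple[float, float, float]   # camera target (look-at point)
    asp: float                       # field-of-view aspect ratio
    fov: float                       # field-of-view angle (radians assumed)
    zn: float                        # minimum effective depth
    zf: float                        # maximum effective depth


@dataclass
class SamplePoint:
    # v_i = (c_i, d_i, p_i): color image, depth image, camera parameters
    color: object                    # c_i, e.g. an H x W x 3 array
    depth: object                    # d_i, e.g. an H x W array
    params: CameraParams


@dataclass
class Model:
    # m = <k, v>: k is the sample-point connectivity (here a triangle list),
    # v the set of sampling points
    K: List[Tuple[int, int, int]]
    V: List[SamplePoint]
```

In this sketch the simplicial complex k is stored simply as a list of vertex-index triples, one per triangular facet of the sampling sphere.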
Before the model can be rendered under a virtual viewpoint, the three sampling points nearest to it must be found, i.e. the virtual viewpoint must be calibrated. Since all sampling points are evenly distributed on the sphere surrounding the object, once the sphere has been triangulated according to the sample-point coordinates it suffices to determine the triangular facet containing the virtual viewpoint: its three vertices are exactly the required nearest sampling points, called the reference points of the virtual viewpoint.
$\langle v_1, v_2, v_3 \rangle = f(v)$   (1)
where v is the virtual viewpoint, $\langle v_1, v_2, v_3 \rangle$ is the triangular facet containing it, and $v_1, v_2, v_3$ are the reference points of v. In the invention, the facet is found by intersecting the vector pointing from the virtual-viewpoint camera position toward the sphere center with the polyhedron approximating the sphere; the facet pierced by this ray yields the three nearest reference points.
The virtual viewpoint and the reference points have the positional relationship shown in Fig. 3. By analytic geometry, under this relationship the virtual viewpoint can be synthesized linearly from the reference points:
$$\vec{v} = \alpha\,\vec{v}_1 + \beta\,\vec{v}_2 + (1-\alpha-\beta)\,\vec{v}_3,\qquad 0 \le \alpha \le 1,\quad 0 \le \beta \le 1,\quad 0 \le 1-\alpha-\beta \le 1 \qquad (2)$$
where $\vec{v}$, $\vec{v}_1$, $\vec{v}_2$, $\vec{v}_3$ are the vectors pointing toward the sphere center from the virtual-viewpoint coordinates and from the three reference-point coordinates, respectively.
As seen in Fig. 3, the three reference points and the sphere center form a tetrahedron, and the virtual-viewpoint coordinates lie inside the triangular facet spanned by the reference-point coordinates. Introducing the edge vectors $\vec{e}_1$ and $\vec{e}_2$ of this facet, the tetrahedron volume can be expressed as a mixed product of three non-coplanar edges:
$$u = \tfrac{1}{6}\,\vec{v} \times \vec{e}_1 \cdot \vec{e}_2 = -\tfrac{1}{6}\,\vec{v}_1 \cdot \vec{v}_2 \times \vec{e}_2 \qquad (3)$$
Rearranging formula (2):
$$\vec{v}_1 = \frac{\vec{v} - \beta\,\vec{v}_2 - (1-\alpha-\beta)\,\vec{v}_3}{\alpha} \qquad (4)$$
Substituting formula (4) into the second expression of formula (3) gives:
$$\alpha = -\frac{\vec{v} \cdot \vec{v}_2 \times \vec{e}_2}{\vec{v} \times \vec{e}_1 \cdot \vec{e}_2} \qquad (5)$$
Similarly:
$$\beta = -\frac{\vec{v} \cdot \vec{v}_1 \times \vec{e}_2}{\vec{v} \times \vec{e}_1 \cdot \vec{e}_2} \qquad (6)$$
In summary, it suffices to traverse all facets and evaluate α and β from formulas (5) and (6) for each one; if α and β satisfy the constraints of formula (2), the virtual viewpoint lies within that facet and can be expressed linearly in terms of its reference points according to formula (2).
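The facet search described above can be sketched as follows. Instead of the mixed-product formulas (5) and (6), this sketch solves the equivalent 3x3 linear system for the ray through the virtual viewpoint's direction vector — a hypothetical but mathematically equivalent formulation; function and variable names are assumptions.

```python
import numpy as np


def locate_facet(v, vertices, triangles):
    """Find the triangular facet pierced by direction v and the weights (alpha, beta).

    v         : 3-vector for the virtual viewpoint (toward the sphere center)
    vertices  : (N, 3) array of sample-point direction vectors
    triangles : iterable of vertex-index triples
    Returns (facet index, alpha, beta) or None. Solves
    s*v = alpha*v1 + beta*v2 + (1-alpha-beta)*v3, the linear-system
    equivalent of formulas (5)-(6), then checks the constraints of (2).
    """
    v = np.asarray(v, dtype=float)
    for t, (i, j, k) in enumerate(triangles):
        v1, v2, v3 = vertices[i], vertices[j], vertices[k]
        # Unknowns (s, alpha, beta):  s*v - alpha*(v1-v3) - beta*(v2-v3) = v3
        A = np.column_stack([v, -(v1 - v3), -(v2 - v3)])
        try:
            s, a, b = np.linalg.solve(A, v3)
        except np.linalg.LinAlgError:
            continue  # degenerate facet relative to this ray
        g = 1.0 - a - b
        if s > 0 and -1e-9 <= a <= 1 and -1e-9 <= b <= 1 and -1e-9 <= g <= 1:
            return t, a, b
    return None
```

For example, with the three unit axis vectors as sample points and a viewpoint direction along (1, 1, 1), the facet is found with alpha = beta = 1/3, matching the symmetric position of the viewpoint.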
2. Computation and rendering module
This module comprises two sub-processes: computation and rendering. The computation process determines the mapping from the pixels under each reference viewpoint to the pixels under the virtual viewpoint; the rendering process draws each pixel under the virtual viewpoint from the pixels of one to three reference viewpoints, or from the background, according to the computed mapping.
2.1 calculating process:
Given the pixel coordinates and pixel value of the depth image under a reference viewpoint, together with the reference point's camera parameters, the corresponding point in the three-dimensional world coordinate system can be obtained, and the process is invertible. Under any reference viewpoint there is a bijection between the pixel coordinates plus depth value of the depth image and the object's coordinates in the world coordinate system:
$$\vec{pixel} = m \cdot \vec{object},\qquad \vec{object} = m^{-1} \cdot \vec{pixel},\qquad \vec{pixel} = (i, j, 1, depth),\qquad \vec{object} = (x, y, z, depth) \qquad (7)$$
where i, j are the pixel coordinates, x, y, z the coordinates in the world coordinate system, and depth the pixel value of the depth image. m is an invertible matrix determined by the sampling-point camera parameters. In the invention, the matrix m converting world coordinates to pixel coordinates is called the forward transformation matrix and $m^{-1}$ the inverse transformation matrix. Their derivation is shown in Fig. 4 and is described below for the forward transformation matrix.
The object's world coordinates are first converted to camera-space coordinates by the camera view transformation, and then to pixel coordinates by the perspective projection transformation, i.e.:
$$m = m_{project} \cdot m_{lookat} \qquad (8)$$
Combining formulas (7) and (8):
$$\vec{pixel} = m_{project} \cdot m_{lookat} \cdot \vec{object},\qquad \vec{object} = (m_{project} \cdot m_{lookat})^{-1} \cdot \vec{pixel} \qquad (9)$$
where $m_{lookat}$ is the transformation matrix from world coordinates to camera-space coordinates, determined by the camera position pc, the target position po and the up-direction vector up:
$$m_{lookat} = \begin{pmatrix} \vec{x}.x & \vec{y}.x & \vec{z}.x & 0 \\ \vec{x}.y & \vec{y}.y & \vec{z}.y & 0 \\ \vec{x}.z & \vec{y}.z & \vec{z}.z & 0 \\ -\vec{x} \cdot pc & -\vec{y} \cdot pc & -\vec{z} \cdot pc & 1 \end{pmatrix},\qquad \vec{z} = \frac{pc - po}{|pc - po|},\quad \vec{x} = \frac{up \times \vec{z}}{|up \times \vec{z}|},\quad \vec{y} = \vec{z} \times \vec{x} \qquad (10)$$
$m_{project}$ is the perspective projection matrix, determined by the camera's field-of-view range (fov), aspect ratio (asp), nearest depth (zn) and farthest depth (zf):
$$m_{project} = \begin{pmatrix} xscale & 0 & 0 & 0 \\ 0 & yscale & 0 & 0 \\ 0 & 0 & \dfrac{zf}{zf - zn} & 1 \\ 0 & 0 & \dfrac{-zn \cdot zf}{zf - zn} & 0 \end{pmatrix},\qquad yscale = \cot\!\left(\frac{fov}{2}\right),\quad xscale = \frac{yscale}{asp} \qquad (11)$$
The mapping from a pixel under a reference viewpoint to the pixel under the virtual viewpoint is therefore:
$$\vec{pixel} = m_{project} \cdot m_{lookat} \cdot \left(m_{project}^{\,v_i} \cdot m_{lookat}^{\,v_i}\right)^{-1} \cdot \vec{pixel}_{v_i} \qquad (12)$$
where $\vec{pixel}$ is the pixel coordinate and depth value under the virtual viewpoint, $\vec{pixel}_{v_i}$ the pixel coordinate and depth value under reference point $v_i$, $m_{lookat}$ and $m_{project}$ the world-to-camera transformation matrix and perspective projection matrix of the virtual viewpoint, and $m_{lookat}^{\,v_i}$ and $m_{project}^{\,v_i}$ those of reference point $v_i$.
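The transformation matrices of formulas (10)-(12) can be sketched as follows. This is a minimal sketch under the assumption of a Direct3D-style row-vector convention (points multiply matrices from the left, translation in the bottom row), which matches the matrix layouts in the text; the viewport scaling from clip space to integer pixel indices is omitted, and all names are assumptions.

```python
import numpy as np


def look_at(pc, po, up=(0.0, 1.0, 0.0)):
    """View matrix of formula (10), row-vector convention (assumed)."""
    pc, po, up = (np.asarray(a, dtype=float) for a in (pc, po, up))
    zaxis = (pc - po) / np.linalg.norm(pc - po)
    xaxis = np.cross(up, zaxis)
    xaxis /= np.linalg.norm(xaxis)
    yaxis = np.cross(zaxis, xaxis)
    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2] = xaxis, yaxis, zaxis
    m[3, :3] = [-xaxis @ pc, -yaxis @ pc, -zaxis @ pc]
    return m


def project(fov, asp, zn, zf):
    """Perspective matrix of formula (11)."""
    yscale = 1.0 / np.tan(fov / 2.0)       # cot(fov/2)
    xscale = yscale / asp
    m = np.zeros((4, 4))
    m[0, 0], m[1, 1] = xscale, yscale
    m[2, 2], m[2, 3] = zf / (zf - zn), 1.0
    m[3, 2] = -zn * zf / (zf - zn)
    return m


def reproject(pix_ref, ref_cam, virt_cam):
    """Formula (12): map a homogeneous pixel vector from a reference view
    into the virtual view. Cameras are (pc, po, fov, asp, zn, zf) tuples
    (an assumed layout)."""
    def full(cam):
        pc, po, fov, asp, zn, zf = cam
        # Row convention: world -> camera -> clip is look_at @ project.
        return look_at(pc, po) @ project(fov, asp, zn, zf)
    return pix_ref @ np.linalg.inv(full(ref_cam)) @ full(virt_cam)
```

As a sanity check, mapping a pixel from a reference camera back into the same camera must be the identity, since the two composite matrices in (12) then cancel.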
2.2 drawing process:
The image under the virtual viewpoint is stored as a color image. By formula (12), a pixel under the virtual viewpoint may correspond to reference-viewpoint pixels in several ways:
Case 1: no pixel under any reference viewpoint corresponds to the pixel of the virtual-viewpoint image; the pixel is then part of a hole;
Case 2: exactly one pixel under one reference viewpoint corresponds to it; that pixel is used directly to fill the virtual-viewpoint image;
Case 3: pixels under two or three reference viewpoints correspond to it; the virtual-viewpoint image is then rendered according to formula (13). Denoting the pixel under the virtual viewpoint by p and the corresponding pixels under the three reference viewpoints by p_1, p_2, p_3, the image is in principle drawn from the reference-viewpoint pixels with the smallest depth values, where α_i is the weight of the i-th reference viewpoint obtained from formula (2).
$$p = \sum_{p_i \in Q} \frac{\alpha_i}{\alpha}\,p_i,\qquad Q = \left\{ p_i \;\middle|\; p_i - p' < thresh,\; p' = \min(p_1, \dots, p_n) \right\},\qquad \alpha = \sum_{p_i \in Q} \alpha_i,\qquad n = 2, 3 \qquad (13)$$
All pixels belonging to case 2 and case 3 are marked during rendering, and the marks are stored in the gray map. Rendering uses CUDA parallel computation to synthesize many pixels of the virtual viewpoint simultaneously, which greatly accelerates rendering.
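The per-pixel decision among the three cases, with the depth-thresholded weighted average of formula (13), can be sketched as follows. This is an illustrative scalar version, not the patent's CUDA kernel; names and the candidate layout are assumptions.

```python
import numpy as np


def blend_pixel(candidates, weights, background, thresh=15.0):
    """Fill one virtual-view pixel from 0-3 mapped reference pixels.

    candidates : list of (depth, color) pairs mapped to this pixel
    weights    : alpha_i of each candidate's reference view (formula (2))
    background : color used for holes
    Returns (color, hit_flag); hit_flag is the mark stored in the gray map.
    """
    if not candidates:                        # case 1: hole -> background
        return background, 0
    depths = np.array([d for d, _ in candidates], dtype=float)
    colors = np.array([c for _, c in candidates], dtype=float)
    w = np.array(weights, dtype=float)
    # Formula (13): only candidates within `thresh` of the smallest depth
    # (the front-most surface) contribute to the weighted average.
    keep = depths - depths.min() < thresh
    w = w[keep] / w[keep].sum()               # renormalize the alpha_i
    return tuple(w @ colors[keep]), 1         # cases 2 and 3
```

With a single surviving candidate this reduces to case 2 (the pixel is copied); with several close-in-depth candidates it is the case-3 blend, while occluded (far-depth) candidates are discarded.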
3. Image optimization module
The color image output by the rendering process contains noise and holes. The cause of the holes was explained in the computation and rendering module; the noise arises because depth extraction carries errors and the mapping obtained from formula (12) does not necessarily land on integer pixels, so noise appears during rendering, as shown in Figs. 7 and 8. A simple remedy is to median-filter the whole virtual-viewpoint image, which removes most noise and holes but blurs and over-smooths the image, as shown in Fig. 10. The invention instead first applies dilation and difference operations to the gray map to extract the image's edge contours, then median-filters along the edge contours, filling holes from their surrounding pixels while removing noise at the image edges. The image optimization module comprises the following sub-processes:
3.1 Extracting edge pixels
Dilation is a local-maximum operation: the image is convolved with a kernel, the maximum over the pixels covered by the kernel is computed, and that maximum is assigned to the pixel at the kernel's anchor point. The color image and gray map produced by the computation and rendering module both contain holes; dilating the gray map eliminates them. Subtracting the original gray map from the dilated one yields the edge map, which stores the edge-contour information; the holes lie along the edge contours, as shown in Fig. 9.
3.2 Median filtering
The invention smooths hole pixels by median filtering, which replaces each pixel by the median of the pixels in the square neighborhood centered on it under the filter template. Because median filtering can distort the image, the pixels subjected to median replacement are strictly limited to the edge contours; this minimizes the number of filtering operations and preserves detail. Moreover, since noise pixels are mostly isolated points in space, most edge noise is replaced by background pixels during the filtering, achieving the goal of noise elimination. The final filtered result is shown in Fig. 11.
As in the rendering process, CUDA parallel computation accelerates the image operations and improves the overall speed of the rendering method.
2. Implementation process
The specific implementation of the invention is described next, taking the rendering of a jade Buddha as an example.
(1) 162 sampling points are selected uniformly on the spherical surface surrounding the jade Buddha, with the sphere center as the coordinate origin and r = 430. The coordinates of the i-th sampling point, v_i = (x_i, y_i, z_i), satisfy:
$$\begin{cases} x_i = r\left(1 - \dfrac{2i-1}{n}\right)\cos\left(n\pi \arcsin\left(1 - \dfrac{2i-1}{n}\right)\right) \\[4pt] y_i = r\left(1 - \dfrac{2i-1}{n}\right)\sin\left(n\pi \arcsin\left(1 - \dfrac{2i-1}{n}\right)\right) \\[4pt] z_i = r\cos\left(\arcsin\left(1 - \dfrac{2i-1}{n}\right)\right) \end{cases} \qquad (14)$$
where n = 162.
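The sampling of step (1) can be sketched as follows, under the assumption that formula (14) places point i at latitude parameter t_i = 1 - (2i - 1)/n with spiral angle nπ·arcsin(t_i); the formula is reconstructed from a garbled extraction, so treat this purely as an illustration. With this reading, x_i² + y_i² + z_i² = r²t_i² + r²(1 - t_i²) = r², so every point does lie on the sphere of radius r.

```python
import numpy as np


def sphere_samples(n=162, r=430.0):
    """Sample-point coordinates per the reconstructed formula (14)."""
    i = np.arange(1, n + 1)
    t = 1.0 - (2.0 * i - 1.0) / n          # latitude parameter in (-1, 1)
    phi = n * np.pi * np.arcsin(t)         # spiral angle
    x = r * t * np.cos(phi)
    y = r * t * np.sin(phi)
    z = r * np.cos(np.arcsin(t))           # r * sqrt(1 - t^2)
    return np.stack([x, y, z], axis=1)     # (n, 3) array of pc_i
```

The returned rows are the camera positions pc_i used as sampling points in the example.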
(2) The spherical surface is triangulated according to the sample-point coordinates, and the model data are grouped according to the result. The invention constructs an inscribed polyhedron of the sphere with triangles as basic elements by triangular approximation. After subdivision, the sphere carries 162 sampling points and 320 triangular facets.
(3) The sampling-point parameters are set, and the depth and color images are captured. The parameters of a sampling point are p_i = (pc_i, po_i, asp_i, fov_i, zn_i, zf_i); pc_i is the sampling-point coordinate computed by formula (14), and the remaining parameters are set as shown in the following table:
Table 1
po_i        fov_i   asp_i   zn_i   zf_i
(0, 0, 0)   47°     1.333   350    550
After the parameters are set, the color images are captured first and then the depth images. Three sampling points v_1, v_2, v_3 are selected, where:
pc_{v_1} = (-182.8899, 365.7798, 132.8773)
pc_{v_2} = (-220.4835, 217.4600, -298.3256)
pc_{v_3} = (295.9221, 226.0644, 215.0000)
The collected color and depth images are shown in Fig. 12.
(4) The virtual viewpoint is calibrated and the mapping is computed. In this example the user controls the virtual viewpoint interactively with keyboard and mouse to roam the three-dimensional scene; the initial virtual viewpoint is (0, 430, 0), and the change between adjacent virtual viewpoints is specified interactively by the user. Given the current virtual viewpoint, the triangular facets obtained in step (2) are traversed and the facet containing the virtual viewpoint is determined according to formulas (2), (5) and (6); the virtual viewpoint is calibrated by the vectors from the reference points toward the sphere center. The mapping of formula (12) is then computed from the parameters of the virtual viewpoint and the sampling points.
(5) Synthesize the image under the virtual perspective and apply median filtering.
a. When a pixel in the virtual perspective corresponds to no reference-point pixel, the background pixel is used to fill the virtual-perspective image;
b. When a pixel in the virtual perspective corresponds to exactly one reference-point pixel, that corresponding pixel is used to fill the virtual-perspective image;
c. When a pixel in the virtual perspective corresponds to the pixels of two or three reference points, thresh = 15 is set and one or more reference-point pixels are selected according to formula (13) to synthesize the virtual-perspective image.
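The three cases above can be sketched per pixel as follows. Formula (13) is not reproduced in the text, so the case-c rule here (average the candidates whose depth lies within thresh of the nearest one) is an assumption consistent with the stated role of thresh = 15:

```python
import numpy as np

def synthesize_pixel(candidates, background, thresh=15.0):
    """Fill one virtual-view pixel from its mapped reference-point pixels,
    following cases a-c of step (5). Each candidate is a (depth, color)
    pair; the second return value marks cases b and c for the gray map."""
    if not candidates:                       # case a: no reference pixel
        return background, False
    if len(candidates) == 1:                 # case b: exactly one reference
        return candidates[0][1], True
    # case c: 2-3 candidates; keep those near the closest surface and blend
    # (hedged stand-in for formula (13), which is not quoted in the text)
    zmin = min(d for d, _ in candidates)
    kept = [c for d, c in candidates if d - zmin <= thresh]
    return tuple(np.mean(kept, axis=0)), True
```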
During drawing, all pixels belonging to cases b and c are marked, and the label information is stored in a gray-scale map. The gray-scale map is dilated and then subtracted from the original gray-scale map to obtain the contour of the model; median filtering is applied along the model contour to obtain the final color image under the virtual perspective.
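The contour-based optimization described above can be sketched as follows (single-channel image for brevity; `dilate` is a plain 3x3 morphological dilation built from shifted copies):

```python
import numpy as np

def repair_along_contour(color, gray):
    """Dilate the gray map that marks reference-filled pixels, subtract the
    original to get the model contour, then median-filter the color image
    along that contour so holes are filled from neighbouring pixels and
    noise pixels are suppressed."""
    mask = gray > 0
    h, w = mask.shape

    def dilate(m):
        p = np.pad(m, 1)                      # zero border, then 3x3 max
        return np.max(np.stack([p[i:i + h, j:j + w]
                                for i in range(3) for j in range(3)]), axis=0)

    contour = dilate(mask) & ~mask            # dilation minus original map
    out = color.copy()
    pc = np.pad(color, 1, mode='edge')
    for y, x in zip(*np.nonzero(contour)):    # median filter on the contour
        out[y, x] = np.median(pc[y:y + 3, x:x + 3])
    return out
```

A hole pixel surrounded by model pixels lands on the extracted contour and is replaced by the median of its 3x3 neighbourhood.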
The output of this example is the color image under the virtual perspective controlled interactively by the user; Figure 13 shows part of the output of this example.
3. Implementation results
Main explanation implementation result in terms of real-time with sense of reality two.The main configuration of computer used by test is as follows Shown in table:
Table 2
Operating system   64-bit Windows 7 Ultimate
Processor          Intel Core™2 Quad Q9400 (4 CPUs), 2.66 GHz
Video card         NVIDIA GeForce GTX 470
Memory             4 GB
3.1 Real-time performance
On the test computer, a jade Buddha was used as the test object and modeled with 3ds Max. Owing to the optical properties of jade, light entering the material scatters inside it before leaving the object and reaching the eye; this phenomenon is called subsurface scattering, and computing it takes considerable time: rendering a single view with 3ds Max takes about 40 minutes. With the image-based three-dimensional model drawing method, the material of the model need not be considered during modeling, and drawing fills the image pixel by pixel; the real-time frame rate on the test computer is about 25 frames per second, which responds quickly to user input and lets the virtual perspective roam arbitrarily in space, fully meeting the requirement of real-time interaction.
3.2 Realism
In this application 162 sampled points are used, and the three sampled points closest to the virtual perspective are selected as reference points; drawing is carried out pixel by pixel, and after the preliminary pass, scene optimization is performed to repair holes and eliminate noise in the image. The final effect is shown in Figure 13. Compared with the original input of Figure 12, Figure 13 contains no holes or noise pixels; a lifelike drawing effect is obtained without the complex subsurface-scattering computation.
The parts of the present invention not elaborated belong to techniques well known to those skilled in the art.

Claims (1)

1. A complicated three-dimensional model drawing method based on images, characterized by comprising the following processes:
(1) Calibration of the virtual perspective: the sphere surrounding the model is triangulated according to the camera-position coordinates of the sampled points, the triangular patch where the virtual perspective lies is determined, the perspectives corresponding to the three vertices of this patch are taken as reference perspectives, and the virtual perspective can be expressed as a linear combination of the reference perspectives;
The three-dimensional model of the object is represented by the depth images, color images and camera parameters of different sampled points; the three-dimensional model M is represented by the 2-tuple <K, V>, where K is a simplicial complex describing the connectivity of the sampled points; V is the set of sampled points, V = (v_i | i = 1, 2, 3, ..., |V|), |V| being the number of sampled points; v_i = (c_i, d_i, p_i) is the i-th sampled point, where c_i and d_i are the color image and depth image of the i-th sampled point, and p_i = (pc_i, po_i, asp_i, fov_i, zn_i, zf_i) are its camera parameters: pc_i is the camera position, po_i the camera target, asp_i the aspect ratio of the field of view, fov_i the extent of the field of view, and zn_i, zf_i the minimum and maximum effective depths of the camera;
Before drawing the model under the virtual perspective, the three sampled points nearest to the virtual perspective must be obtained, i.e., the virtual perspective must be calibrated; since all sampled points are uniformly distributed on the sphere surrounding the object, once the sphere has been triangulated according to the sampled-point coordinates it suffices to determine the triangular patch where the virtual perspective lies, whose three vertices are exactly the nearest sampled points required; these three nearest sampled points are called the reference points of the virtual perspective,
<v1, v2, v3> = f(v)    (1)
where v is the virtual perspective, <v1, v2, v3> is the triangular patch where the virtual perspective lies, and v1, v2, v3 are the reference points of v; by solving for the intersection of the vector pointing from the virtual-perspective camera position to the sphere center with the polyhedron approximating the sphere surrounding the object, the three nearest reference points are determined from the triangular patch where the intersection lies;
The virtual perspective can be synthesized linearly from the reference points according to formula (2), where the vectors denote, respectively, the vectors pointing toward the sphere center from the virtual-perspective coordinate point and from the three reference-point coordinates;
The coordinates of the three reference points and the sphere center form a tetrahedron, and the coordinate of the virtual perspective lies within the triangular patch enclosed by the reference-point coordinates; letting the three edge vectors, which are not coplanar, be given, the volume of the tetrahedron can be expressed in the form of the mixed product of its three edges, giving formula (3);
Deforming formula (2) yields formula (4);
Substituting the latter half of formula (4) into formula (3) yields formula (5);
In the same manner, formula (6) can be obtained;
It is only necessary to traverse all the patches and obtain α, β for each triangular patch using formulas (5) and (6); if α, β satisfy the constraints in formula (2), the virtual perspective lies within the enclosure of this patch and can be expressed linearly by the reference points according to formula (2);
(2) Calculation and drawing: according to the positions and camera parameters of the virtual perspective and the reference perspectives, the mapping relations between the pixel coordinates in each of the three reference-perspective images and the pixel coordinates under the virtual perspective are calculated; according to these mapping relations, each pixel of the color image under a reference perspective is mapped into the image under the virtual perspective, and its coordinate and depth value in the virtual-perspective image are calculated; when pixels of multiple reference perspectives map to the same position, the pixel with the smaller depth value is taken; meanwhile, all pixels in the virtual-perspective image filled by reference-perspective pixels are marked, constructing a gray-scale map reflecting the mapping situation from the reference perspectives to the virtual perspective;
(3) Optimization of the image: for holes in the virtual-perspective image, i.e., pixels to which no reference-image pixel is mapped by the calculation of (2), edge-contour information is extracted from the gray-scale map reflecting the mapping situation from the reference perspectives to the virtual perspective, and median filtering is applied to the generated color image along the edge contour, filling the holes with the values of neighboring pixels; meanwhile, the median filtering filters out noise pixels;
The calibration of the virtual perspective is realized by step (1), determining the reference perspectives used for drawing under the virtual perspective; step (2) establishes the mapping relations between pixels under the reference perspectives and pixels under the virtual perspective, realizing the drawing of the complicated model under the virtual perspective;
CUDA parallel computation is used in steps (2) and (3) to accelerate the drawing and optimization under the virtual perspective, reaching the requirement of real-time interaction;
The complicated three-dimensional model drawing method based on images draws the model under the virtual perspective using three reference points, which both guarantees that the virtual perspective can move freely in the horizontal and vertical dimensions and reduces the input size and storage overhead of the algorithm as much as possible;
The complicated three-dimensional model drawing method based on images is highly stable; the time complexity of the algorithm is only weakly coupled to the complexity of the scene, making it particularly suitable for drawing complex models.
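Step (2) of the claim, the calculation-and-drawing pass, can be sketched as follows. The patent's mapping formula (12) is not reproduced in the text, so this uses the standard depth-image warp; a pinhole intrinsic matrix K and 4x4 camera-to-world pose matrices are assumptions (the patent instead stores fov/aspect/zn/zf, from which such matrices can be derived):

```python
import numpy as np

def warp_reference_to_virtual(depth_ref, color_ref, K, pose_ref, pose_virt,
                              zbuf, out):
    """Map every reference-view pixel into the virtual view: back-project
    each pixel with its depth into the reference camera frame, transform it
    into the virtual camera frame, re-project it, and keep the candidate
    with the smaller depth when several land on the same pixel."""
    h, w = depth_ref.shape
    inv_K = np.linalg.inv(K)
    ref_to_virt = np.linalg.inv(pose_virt) @ pose_ref  # camera-to-camera
    for y in range(h):
        for x in range(w):
            z = depth_ref[y, x]
            if z <= 0:
                continue                               # background pixel
            p_cam = inv_K @ np.array([x, y, 1.0]) * z  # back-project
            p = ref_to_virt @ np.append(p_cam, 1.0)    # into virtual camera
            if p[2] <= 0:
                continue                               # behind the camera
            q = K @ p[:3] / p[2]                       # re-project
            u, v = int(round(q[0])), int(round(q[1]))
            if 0 <= u < w and 0 <= v < h and p[2] < zbuf[v, u]:
                zbuf[v, u] = p[2]                      # nearer surface wins
                out[v, u] = color_ref[y, x]
```

With identical reference and virtual poses, every foreground pixel maps back to itself, which makes the z-buffer logic easy to check.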
CN201310497271.8A 2013-10-21 2013-10-21 Complicated three-dimensional model drawing method based on images Active CN103530907B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310497271.8A CN103530907B (en) 2013-10-21 2013-10-21 Complicated three-dimensional model drawing method based on images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310497271.8A CN103530907B (en) 2013-10-21 2013-10-21 Complicated three-dimensional model drawing method based on images

Publications (2)

Publication Number Publication Date
CN103530907A CN103530907A (en) 2014-01-22
CN103530907B true CN103530907B (en) 2017-02-01

Family

ID=49932885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310497271.8A Active CN103530907B (en) 2013-10-21 2013-10-21 Complicated three-dimensional model drawing method based on images

Country Status (1)

Country Link
CN (1) CN103530907B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104331918B (en) * 2014-10-21 2017-09-29 无锡梵天信息技术股份有限公司 Based on earth's surface occlusion culling and accelerated method outside depth map real-time rendering room
CN105509671B (en) * 2015-12-01 2018-01-09 中南大学 A kind of robot tooling center points scaling method using plane reference plate
CN107169924B (en) * 2017-06-14 2020-10-09 歌尔科技有限公司 Method and system for establishing three-dimensional panoramic image
CN107464278B (en) * 2017-09-01 2020-01-24 叠境数字科技(上海)有限公司 Full-view sphere light field rendering method
CN108520342B (en) * 2018-03-23 2021-12-17 中建三局第一建设工程有限责任公司 BIM-based Internet of things platform management method and system
CN111402404B (en) * 2020-03-16 2021-03-23 北京房江湖科技有限公司 Panorama complementing method and device, computer readable storage medium and electronic equipment
CN111651055A (en) * 2020-06-09 2020-09-11 浙江商汤科技开发有限公司 City virtual sand table display method and device, computer equipment and storage medium
CN112199756A (en) * 2020-10-30 2021-01-08 久瓴(江苏)数字智能科技有限公司 Method and device for automatically determining distance between straight lines
CN114543816B (en) * 2022-04-25 2022-07-12 深圳市赛特标识牌设计制作有限公司 Guiding method, device and system based on Internet of things
CN115272523B (en) * 2022-09-22 2022-12-09 中科三清科技有限公司 Method and device for drawing air quality distribution map, electronic equipment and storage medium
CN116502371B (en) * 2023-06-25 2023-09-08 厦门蒙友互联软件有限公司 Ship-shaped diamond cutting model generation method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6697062B1 (en) * 1999-08-06 2004-02-24 Microsoft Corporation Reflection space image based rendering
CN102945565A (en) * 2012-10-18 2013-02-27 深圳大学 Three-dimensional photorealistic reconstruction method and system for objects and electronic device
CN103116897A (en) * 2013-01-22 2013-05-22 北京航空航天大学 Three-dimensional dynamic data compression and smoothing method based on image space

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060017720A1 (en) * 2004-07-15 2006-01-26 Li You F System and method for 3D measurement and surface reconstruction
US8643701B2 (en) * 2009-11-18 2014-02-04 University Of Illinois At Urbana-Champaign System for executing 3D propagation for depth image-based rendering

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6697062B1 (en) * 1999-08-06 2004-02-24 Microsoft Corporation Reflection space image based rendering
CN102945565A (en) * 2012-10-18 2013-02-27 深圳大学 Three-dimensional photorealistic reconstruction method and system for objects and electronic device
CN103116897A (en) * 2013-01-22 2013-05-22 北京航空航天大学 Three-dimensional dynamic data compression and smoothing method based on image space

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
An image-based rendering (IBR) approach for realistic stereo view synthesis of TV broadcast based on structure from motion; Knorr S et al.; Image Processing, IEEE International Conference on; 2009-09-30; Vol. 6; VI-572 to VI-575 *

Also Published As

Publication number Publication date
CN103530907A (en) 2014-01-22

Similar Documents

Publication Publication Date Title
CN103530907B (en) Complicated three-dimensional model drawing method based on images
CN110458939B (en) Indoor scene modeling method based on visual angle generation
CN106600667B (en) Video-driven face animation method based on convolutional neural network
CN102592275B (en) Virtual viewpoint rendering method
CN102915559B (en) Real-time transparent object GPU (graphic processing unit) parallel generating method based on three-dimensional point cloud
CN108416840A (en) A kind of dense method for reconstructing of three-dimensional scenic based on monocular camera
US20050140670A1 (en) Photogrammetric reconstruction of free-form objects with curvilinear structures
CN110288695A (en) Single-frame images threedimensional model method of surface reconstruction based on deep learning
CN103559737A (en) Object panorama modeling method
CN101916454A (en) Method for reconstructing high-resolution human face based on grid deformation and continuous optimization
WO2014117447A1 (en) Virtual hairstyle modeling method of images and videos
CN108986221A (en) A kind of three-dimensional face grid texture method lack of standardization approached based on template face
CN104077808A (en) Real-time three-dimensional face modeling method used for computer graph and image processing and based on depth information
CN110223370A (en) A method of complete human body&#39;s texture mapping is generated from single view picture
CN106127818A (en) A kind of material appearance based on single image obtains system and method
CN103559733A (en) Spherical body drawing method supporting three-dimension data inner viewpoint roaming
CN103646421A (en) Tree lightweight 3D reconstruction method based on enhanced PyrLK optical flow method
CN106251281B (en) A kind of image morphing method based on shape interpolation
CN105809734B (en) A kind of mechanical model three-dimensional modeling method based on multi-angle of view interactive mode
CN111127658A (en) Point cloud reconstruction-based feature-preserving curved surface reconstruction method for triangular mesh curved surface
Zhang et al. Virtual reality design and realization of interactive garden landscape
Phongthawee et al. Nex360: Real-time all-around view synthesis with neural basis expansion
Marniok et al. Real-time variational range image fusion and visualization for large-scale scenes using GPU hash tables
Hu et al. Image-based modeling of inhomogeneous single-scattering participating media
Wang et al. 3D Reconstruction of Plant Based on NeRF

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP02 Change in the address of a patent holder
CP02 Change in the address of a patent holder

Address after: 518133 23rd floor, Yishang science and technology creative building, Jiaan South Road, Haiwang community Central District, Xin'an street, Bao'an District, Shenzhen City, Guangdong Province

Patentee after: SHENZHEN ESUN DISPLAY Co.,Ltd.

Patentee after: BEIHANG University

Address before: No. 4001, Fuqiang Road, Futian District, Shenzhen, Guangdong 518048 (B301, Shenzhen cultural and Creative Park)

Patentee before: SHENZHEN ESUN DISPLAY Co.,Ltd.

Patentee before: BEIHANG University