CN113436305A - High-quality geometric modeling method guided by direct volume rendering - Google Patents

High-quality geometric modeling method guided by direct volume rendering Download PDF

Info

Publication number
CN113436305A
Authority
CN
China
Prior art keywords
value
point
opacity
volume rendering
transfer function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110695142.4A
Other languages
Chinese (zh)
Other versions
CN113436305B (en)
Inventor
张文耀
曹远招
王成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202110695142.4A
Publication of CN113436305A
Application granted
Publication of CN113436305B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general, involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/41 Medical

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The present invention relates to a geometric modeling method, and more particularly, to a high-quality geometric modeling method guided by direct volume rendering. The method targets three-dimensional volume data and reconstructs a three-dimensional geometric model under the guidance of direct volume rendering; it belongs to the fields of geometric modeling and three-dimensional reconstruction. The method first renders the given volume data directly by ray casting to obtain a visualized three-dimensional scene; it then constructs a geometric model from the volume data grid and assigns color, opacity, and normal-vector attributes to the model vertices according to the transfer functions of the direct volume rendering; finally, it removes redundant vertices and patches to obtain the final geometric model. The rendering effect of the model under illumination is very close to the visual effect of direct volume rendering: the model presents the surface shape features of the object and displays the detail information inside it semi-transparently.

Description

High-quality geometric modeling method guided by direct volume rendering
Technical Field
The present invention relates to a geometric modeling method, and more particularly, to a high-quality geometric modeling method guided by direct volume rendering. The method targets three-dimensional volume data and reconstructs a three-dimensional geometric model under the guidance of direct volume rendering; it belongs to the fields of geometric modeling and three-dimensional reconstruction.
Background
Three-dimensional reconstruction refers to establishing mathematical models of three-dimensional objects that are suitable for computer representation and processing; it is widely used in computer animation, computer vision, medical image processing, virtual reality, and other fields.
In the field of medical image processing, three-dimensional reconstruction mainly refers to converting three-dimensional medical image data into graphics or images by means of computer graphics, image processing, and related methods, displaying them on a screen, and constructing a three-dimensional visual representation of the object so that its three-dimensional structure and morphology can be analyzed. At present, three-dimensional reconstruction of medical images mainly comprises surface reconstruction and volume-rendered reconstruction.
Surface reconstruction presents the surface contour of an object by constructing a surface model of the object. In practice it is implemented either as surface reconstruction based on image segmentation or as surface reconstruction based on iso-surface extraction. Either way, surface reconstruction only builds a surface model with external shape features and contains no internal detail information.
Volume-rendered reconstruction typically employs a direct volume rendering algorithm (e.g., a ray casting algorithm) to project the three-dimensional information contained in the volume data onto a two-dimensional screen. Compared with surface reconstruction, the result not only contains the surface structure and shape information of the object but can also display detail information inside the object, and in a certain sense it is a high-quality three-dimensional reconstruction. However, such reconstruction only visualizes the three-dimensional scene contained in the volume data and does not establish a geometric model that can actually be operated on, which makes it difficult to satisfy applications that require an actual geometric model, such as virtual reality and mixed reality.
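For context, a ray casting renderer samples the volume along each viewing ray, converts every sample to a color and an opacity through the transfer functions, and composites the samples front to back. The following minimal sketch (Python with NumPy) shows only that compositing step; it assumes the scalar samples are already normalized to [0, 1] and uses simple lookup-table transfer functions, and the names composite_ray, color_lut, and opacity_lut are illustrative, not taken from the patent or from any particular library.

import numpy as np

def composite_ray(samples, color_lut, opacity_lut):
    # samples     : interpolated scalar values along one viewing ray, in [0, 1]
    # color_lut   : (N, 3) table mapping a scalar index to an RGB color
    # opacity_lut : (N,)  table mapping a scalar index to an opacity in [0, 1]
    accum_color = np.zeros(3)
    accum_alpha = 0.0
    n = len(opacity_lut) - 1
    for s in samples:
        idx = int(np.clip(s, 0.0, 1.0) * n)
        c, a = color_lut[idx], opacity_lut[idx]
        # front-to-back "over" compositing
        accum_color += (1.0 - accum_alpha) * a * c
        accum_alpha += (1.0 - accum_alpha) * a
        if accum_alpha > 0.99:        # early ray termination
            break
    return accum_color, accum_alpha

# Example with a tiny grayscale lookup table and a few samples along one ray:
lut_c = np.linspace(0.0, 1.0, 256)[:, None] * np.ones(3)
lut_a = np.linspace(0.0, 0.05, 256)
print(composite_ray(np.array([0.2, 0.5, 0.9]), lut_c, lut_a))

Early ray termination, as in the last lines of the loop, is a standard optimization of the ray casting algorithm.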
In view of the above, the present invention proposes a high-quality geometric modeling approach guided by direct volume rendering. The method first renders the given volume data directly with a Ray Casting algorithm (see Levoy M. Display of Surfaces from Volume Data. IEEE Computer Graphics and Applications, Vol. 8, No. 3, 1988) to obtain a visualized three-dimensional scene; it then constructs a geometric model from the volume data grid and assigns color, opacity, and normal-vector attributes to the model vertices according to the transfer functions of the direct volume rendering; finally, it removes redundant vertices and patches to obtain the final geometric model. The rendering effect of the model under illumination is very close to the visual effect of direct volume rendering: the model presents the surface shape features of the object and displays the detail information inside it semi-transparently.
Disclosure of Invention
The invention aims to provide a high-quality geometric modeling method guided by direct volume rendering, so as to solve the problem that volume-rendered reconstruction can establish a high-quality three-dimensional scene but does not generate an actual geometric model.
The purpose of the invention is realized by the following technical scheme.
A method of high quality geometric modeling guided by direct volume rendering, comprising the steps of:
Step 1: Input three-dimensional volume data V. Let the indices of the voxels in V be (i, j, k), with 0 ≤ i < W, 0 ≤ j < H, 0 ≤ k < D; let the coordinates of the center point of the first voxel be (x₀, y₀, z₀); let the size of each voxel be Δx × Δy × Δz; and let the value of the (i, j, k)-th voxel be v(i, j, k).
Step 2: Perform direct volume rendering on the volume data V using a ray casting algorithm to obtain a volume rendering result. Let the opacity transfer function used in the rendering be OTF, the color transfer function be CTF, and the gradient opacity transfer function be GTF.
Step 3: Connect the center points of all voxels in V into a three-dimensional orthogonal rectangular grid G, and denote the grid points by G(i, j, k), with 0 ≤ i < W, 0 ≤ j < H, 0 ≤ k < D.
Step 4: Establish an initial geometric model M, take all grid points G(i, j, k) of G as the vertices of M, and add the rectangles that partition each grid cell of G to M as patches.
Step 5: Divide each rectangular patch of M equally into t × t small rectangular patches (t is a positive integer greater than or equal to 1), add the newly created lattice points to M as vertices, and replace the original rectangular patches of M with the small rectangular patches.
Step 6: For each vertex p of M with coordinates (x, y, z), compute the value at p by interpolation from the volume data V, and then compute the normal vector, opacity, and color value of p as follows (a minimal code sketch of these computations is given after step 9):
Step 6.1: From the value at p and the values of the volume data V, compute the gradient Gₚ of point p by interpolation; compute the gradient opacity value gₚ of p from the gradient magnitude and the gradient opacity transfer function GTF; and take the normalized Gₚ as the normal vector Nₚ of p.
Step 6.2: Compute the initial opacity value α′ₚ of point p from the value at p and the volume rendering opacity transfer function OTF, and let the opacity value of p be αₚ = α′ₚ · gₚ.
Step 6.3: Compute the color value Cₚ of point p from the value at p and the volume rendering color transfer function CTF.
Step 7: Delete every rectangular patch of M whose four vertices all have opacity value 0.
Step 8: Delete all isolated vertices in M.
Step 9: Output the geometric model M consisting of vertices, patches, and vertex attribute data (including normal vectors, opacity, and color values).
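As an informal illustration of steps 6.1 to 6.3, the sketch below computes the attributes of a single vertex from its interpolated scalar value and gradient and from the three transfer functions, which are passed in as callables. It is a minimal sketch under the assumption that OTF, CTF, and GTF can be evaluated as functions; the name vertex_attributes and the stand-in transfer functions in the usage example are hypothetical and not part of the patent.

import numpy as np

def vertex_attributes(value, gradient, otf, ctf, gtf):
    # Step 6.1: gradient opacity g_p from the gradient magnitude, and the
    # normalized gradient as the normal vector N_p.
    grad_mag = float(np.linalg.norm(gradient))
    g = gtf(grad_mag)
    normal = gradient / grad_mag if grad_mag > 0.0 else np.zeros(3)
    # Step 6.2: initial opacity from OTF, scaled by the gradient opacity.
    alpha = otf(value) * g          # alpha_p = alpha'_p * g_p
    # Step 6.3: color from the color transfer function.
    color = ctf(value)              # C_p
    return normal, alpha, color

# Usage with trivial stand-in transfer functions (placeholders only):
otf = lambda v: min(v / 8364.0, 1.0)
ctf = lambda v: np.array([1.0, 0.9, 0.8]) * min(v / 8364.0, 1.0)
gtf = lambda m: 1.0                 # constant gradient opacity transfer function
normal, alpha, color = vertex_attributes(4200.0, np.array([1.0, 2.0, 2.0]), otf, ctf, gtf)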
Advantageous effects
The high-quality geometric modeling method guided by direct volume rendering according to the present invention has the following advantages and characteristics in terms of implementation technique and modeling effect:
(1) The method combines the advantages and characteristics of volume-rendered reconstruction and surface reconstruction: a geometric model built with the method not only presents the surface shape features of the object but also displays the detail information inside the object semi-transparently.
(2) The method performs modeling under the guidance of direct volume rendering; the rendering effect of the resulting geometric model is very close to the volume rendering effect of the original volume data, so high-quality geometric modeling is achieved and the problem that volume-rendered reconstruction generates no practically operable geometric model is solved.
(3) The method involves no image segmentation step, which avoids the reduction in modeling quality caused by inaccurate image segmentation.
Drawings
FIG. 1 is a flow chart of a method of high quality geometric modeling guided by direct volume rendering;
FIG. 2 shows example two-dimensional slices of the brain volume data, where (a), (b), and (c) are sectional images along different axes, respectively;
FIG. 3 shows the opacity transfer function OTF and the color transfer function CTF used in the direct volume rendering of the brain volume data, where the same curve is used for the R, G, and B color components of the CTF;
FIG. 4 shows the direct volume rendering results of the brain volume data, where (a), (b), and (c) are result views at different viewing angles, respectively;
FIG. 5 shows the geometric model of the brain constructed according to the method of the present invention, where (a), (b), and (c) are views of the model from different perspectives, respectively;
FIG. 6 shows a geometric brain model established by a conventional surface modeling method based on image segmentation, where (a), (b), and (c) are model views at different viewing angles, respectively;
FIG. 7 shows the volume rendering result of the engine volume data;
FIG. 8 shows the geometric modeling result of the engine volume data.
Detailed description of the invention
The embodiments of the present invention are described below in connection with the accompanying drawings and examples.
Fig. 1 shows the flow chart of the high-quality geometric modeling method guided by direct volume rendering according to the present invention, which mainly comprises the following steps:
Step S1: Input three-dimensional volume data V. Let the indices of the voxels in V be (i, j, k), with 0 ≤ i < W, 0 ≤ j < H, 0 ≤ k < D; let the coordinates of the center point of the first voxel be (x₀, y₀, z₀); let the size of each voxel be Δx × Δy × Δz; and let the value of the (i, j, k)-th voxel be v(i, j, k).
The three-dimensional volume data V input in the present embodiment is human head volume data selected from a public atlas data set (http://nist.mni.mcgill.ca/). The volume data were obtained by weighted averaging of T1-weighted MRI scans of 152 healthy subjects and are referred to as MNI152. The brain volume data were obtained by extracting the brain from this volume using the BET tool of the FSL software (see Jenkinson M, et al. FSL. NeuroImage, Vol. 62, No. 2, 2012). The number of voxels is 182 × 218 × 182, the size of each voxel is 1 mm × 1 mm × 1 mm, and the coordinates of the center of the first voxel are (0, 0, 0). The brain volume data are scalar field data, and the voxel values lie in the range [0, 8364]. Fig. 2 shows 3 two-dimensional slice images of this volume data.
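Step S6 below evaluates the scalar field at arbitrary vertex positions by interpolation. The sketch that follows is one possible sampler, assuming trilinear interpolation over the voxel-center grid described above; the array layout V[i, j, k] with shape (W, H, D) and the name sample_volume are assumptions made only for this illustration.

import numpy as np

def sample_volume(V, p, origin, spacing):
    # V: array of shape (W, H, D) with V[i, j, k] = v(i, j, k)
    # origin: (x0, y0, z0), center of voxel (0, 0, 0); spacing: (dx, dy, dz)
    origin = np.asarray(origin, float)
    spacing = np.asarray(spacing, float)
    f = (np.asarray(p, float) - origin) / spacing        # continuous voxel index
    base = np.clip(np.floor(f).astype(int), 0, np.asarray(V.shape) - 2)
    tx, ty, tz = f - base
    i, j, k = base
    value = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((tx if di else 1.0 - tx) *
                     (ty if dj else 1.0 - ty) *
                     (tz if dk else 1.0 - tz))
                value += w * V[i + di, j + dj, k + dk]
    return value

# Example on a tiny 2 x 2 x 2 volume: the value at the cell center is the mean, 3.5.
V = np.arange(8, dtype=float).reshape(2, 2, 2)
print(sample_volume(V, (0.5, 0.5, 0.5), (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))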
Step S2: Perform direct volume rendering on the volume data V using a ray casting algorithm to obtain a volume rendering result. Let the opacity transfer function used in the rendering be OTF, the color transfer function be CTF, and the gradient opacity transfer function be GTF.
In this embodiment, the opacity transfer function OTF and the color transfer function CTF used for directly volume-rendering the data V are shown in Fig. 3, while the gradient opacity transfer function GTF is a constant function with value 1. The direct volume rendering result obtained in this step of the embodiment is shown in Fig. 4.
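In code, transfer functions of this kind are often represented as piecewise-linear lookup functions. The sketch below shows such a representation; the control points are arbitrary placeholders and do not reproduce the curves of Fig. 3, while the constant GTF of value 1 does match the choice made in this embodiment.

import numpy as np

# Placeholder control points (scalar value -> opacity); NOT the curves of Fig. 3.
_otf_x = np.array([0.0, 1500.0, 4000.0, 8364.0])
_otf_y = np.array([0.0, 0.0, 0.35, 0.8])
_ctf_y = np.array([0.0, 0.1, 0.7, 1.0])   # same curve reused for R, G and B

def OTF(v):
    return float(np.interp(v, _otf_x, _otf_y))

def CTF(v):
    g = float(np.interp(v, _otf_x, _ctf_y))
    return np.array([g, g, g])            # identical R, G, B components

def GTF(gradient_magnitude):
    return 1.0                            # constant gradient opacity transfer function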
Step S3: Connect the center points of all voxels in V into a three-dimensional orthogonal rectangular grid G, and denote the grid points by G(i, j, k), with 0 ≤ i < W, 0 ≤ j < H, 0 ≤ k < D.
Step S4: Establish an initial geometric model M, take all grid points G(i, j, k) of G as the vertices of M, and add the rectangles that partition each grid cell of G to M as patches.
Step S5: Divide each rectangular patch of M equally into t × t small rectangular patches (t is a positive integer greater than or equal to 1), add the newly created lattice points to M as vertices, and replace the original rectangular patches of M with the small rectangular patches.
The present embodiment sets the parameter t to 1 in this step. In that case the rectangular patches of the model need not be subdivided, which is equivalent to skipping this step. If the parameter t is greater than 1, the rectangular patches of the model are subdivided in this step, which improves the accuracy and quality of the model.
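The sketch below illustrates one reading of steps S3 and S4 (and the t = 1 case of step S5): the voxel centers become the vertices of M and the axis-aligned rectangles bounding each grid cell become quad patches. The function name build_initial_model, the vertex-index layout, and the plain nested loops are illustrative choices made for clarity rather than speed.

import numpy as np
from itertools import product

def build_initial_model(W, H, D, origin, spacing):
    origin = np.asarray(origin, float)
    spacing = np.asarray(spacing, float)
    index = lambda i, j, k: (i * H + j) * D + k          # flatten (i, j, k) -> vertex id
    vertices = np.array([origin + np.array([i, j, k]) * spacing
                         for i, j, k in product(range(W), range(H), range(D))])
    patches = []
    for i, j, k in product(range(W), range(H), range(D)):
        if j + 1 < H and k + 1 < D:   # rectangle perpendicular to the x axis
            patches.append((index(i, j, k), index(i, j + 1, k),
                            index(i, j + 1, k + 1), index(i, j, k + 1)))
        if i + 1 < W and k + 1 < D:   # rectangle perpendicular to the y axis
            patches.append((index(i, j, k), index(i + 1, j, k),
                            index(i + 1, j, k + 1), index(i, j, k + 1)))
        if i + 1 < W and j + 1 < H:   # rectangle perpendicular to the z axis
            patches.append((index(i, j, k), index(i + 1, j, k),
                            index(i + 1, j + 1, k), index(i, j + 1, k)))
    return vertices, patches

vertices, patches = build_initial_model(3, 3, 3, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))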
Step S6: For each vertex p of M with coordinates (x, y, z), compute the value at p by interpolation from the volume data V, and then compute the normal vector, opacity, and color value of p according to the following sub-steps:
Step S6-1: From the value at p and the values of the volume data V, compute the gradient Gₚ of point p by interpolation (one possible realization, using central differences of interpolated values, is sketched after step S6-3); compute the gradient opacity value gₚ of p from the gradient magnitude and the gradient opacity transfer function GTF; and take the normalized Gₚ as the normal vector Nₚ of p.
Step S6-2: Compute the initial opacity value α′ₚ of point p from the value at p and the volume rendering opacity transfer function OTF, and let the opacity value of p be αₚ = α′ₚ · gₚ.
Step S6-3: Compute the color value Cₚ of point p from the value at p and the volume rendering color transfer function CTF.
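Step S6-1 leaves the gradient computation to interpolation. A common concrete choice, sketched below, is a central difference of interpolated values; the helper name central_difference_gradient and the use of the voxel spacing as the finite-difference step are assumptions of this sketch, not requirements of the patent.

import numpy as np

def central_difference_gradient(sample, p, spacing):
    # sample : callable mapping a 3-D point to an interpolated scalar value
    # p      : point at which the gradient G_p is estimated
    # spacing: (dx, dy, dz) step sizes used for the finite differences
    p = np.asarray(p, float)
    gradient = np.zeros(3)
    for axis in range(3):
        step = np.zeros(3)
        step[axis] = spacing[axis]
        gradient[axis] = (sample(p + step) - sample(p - step)) / (2.0 * spacing[axis])
    return gradient

# Example with a synthetic field f(x, y, z) = x^2 + y + 3z; the result is about (2, 1, 3).
f = lambda q: q[0] ** 2 + q[1] + 3.0 * q[2]
print(central_difference_gradient(f, (1.0, 2.0, 3.0), (0.5, 0.5, 0.5)))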
Step S7: Delete every rectangular patch of M whose four vertices all have opacity value 0.
Step S8: Delete all isolated vertices in M.
The isolated vertices referred to in step S8 are vertices that are no longer connected to any remaining rectangular patch of M. A minimal code sketch of this pruning is given below.
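The sketch assumes the model is stored as a vertex array, a list of quad patches given as 4-tuples of vertex indices, and a per-vertex opacity array; the name prune_model and the returned index list (used to remap the remaining vertex attributes) are conventions of this sketch only, not prescribed by the patent.

import numpy as np

def prune_model(vertices, patches, alphas):
    # Step S7: keep only patches with at least one vertex of non-zero opacity.
    kept = [quad for quad in patches if any(alphas[i] > 0.0 for i in quad)]
    # Step S8: keep only vertices still referenced by some remaining patch.
    used = sorted({i for quad in kept for i in quad})
    remap = {old: new for new, old in enumerate(used)}
    new_vertices = np.asarray(vertices)[used]
    new_patches = [tuple(remap[i] for i in quad) for quad in kept]
    return new_vertices, new_patches, used   # 'used' remaps normals, colors, opacities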
Step S9: Output the geometric model M consisting of vertices, patches, and vertex attribute data (including normal vectors, opacity, and color values).
Note that the geometric model obtained in step S9 is composed of vertex coordinates, rectangular patches connecting the vertices, and attribute data of the vertices. The attribute data of a vertex comprise its normal vector, opacity, and color value; these vertex attributes determine the rendering result of the geometric model.
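As an example of how the output of step S9 might be stored on disk, the sketch below writes an ASCII PLY file carrying per-vertex normal, color, and opacity together with quad faces. The file format and the choice of storing the opacity in the PLY alpha channel are assumptions of this sketch; the patent does not prescribe any particular format.

def write_ply(path, vertices, normals, colors, alphas, patches):
    # colors and alphas are assumed to be floats in [0, 1]; patches are 4-tuples of indices.
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(vertices)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property float nx\nproperty float ny\nproperty float nz\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("property uchar alpha\n")
        f.write(f"element face {len(patches)}\n")
        f.write("property list uchar int vertex_indices\nend_header\n")
        for (x, y, z), (nx, ny, nz), (r, g, b), a in zip(vertices, normals, colors, alphas):
            f.write(f"{x} {y} {z} {nx} {ny} {nz} "
                    f"{int(255 * r)} {int(255 * g)} {int(255 * b)} {int(255 * a)}\n")
        for quad in patches:
            f.write("4 " + " ".join(str(i) for i in quad) + "\n")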
The rendering result of the geometric model obtained in step S9 is shown in Fig. 5, in which the sulcus structure of the brain is clearly visible; the rendering effect of the model is very close to the direct volume rendering result of Fig. 4. The method of the invention thus achieves high-quality geometric modeling in this embodiment.
In order to illustrate the advantages and characteristics of the method of the present invention, a surface reconstruction method based on image segmentation was also applied to the three-dimensional volume data V input in this embodiment; the resulting model is shown in Fig. 6.
As can be seen by comparing Figs. 4, 5, and 6, the brain model shown in Fig. 5 has clearer surface contours and sulcus and gyrus details and is very similar to the direct volume rendering result of Fig. 4, while in the brain model shown in Fig. 6 some sulcus and gyrus structures are blurred or lost. The problems of Fig. 6 are due on the one hand to the inaccuracy of image segmentation and on the other hand to the fact that the model is a simple surface model, because a single surface model cannot express information inside the object.
To illustrate the feasibility and versatility of the method of the present invention, it was further tested on an engine volume data set. The volume data are the CT scan result of an engine model, with a size of 256 × 256 × 256 voxels and voxel values in the range [0, 255]. The direct volume rendering result of the engine volume data is shown in Fig. 7, where the outer layer is a semi-transparent outline and the parts inside the engine can be clearly seen through it. For ease of identification, Fig. 7 uses color transfer functions of different hues for the inner and outer components. The engine volume data were then geometrically modeled by the method of the present invention with the volume rendering result of Fig. 7 as the target, the parameter t being set to 2 in step S5, and the geometric model shown in Fig. 8 was obtained. Comparing Fig. 8 with Fig. 7 shows that the geometric model established by the method of the present invention is very close to the volume rendering effect: the shapes of objects at different levels inside and outside are shown through semi-transparent rendering, while very rich model detail is preserved. It should be noted, however, that the translucency of Fig. 8 differs somewhat from that of Fig. 7; this is an aspect in which the method of the present invention can be further improved.
The above steps and examples illustrate the overall process of a high quality geometric modeling approach guided by direct volume rendering according to the present invention.
It should be understood that the present embodiments are only specific examples for implementing the invention, and should not be used for limiting the protection scope of the invention. It is intended that all equivalent modifications and variations of the above-described aspects be included within the scope of the present invention as claimed, without departing from the spirit and scope of the invention.

Claims (1)

1. A method of high quality geometric modeling guided by direct volume rendering, comprising the steps of:
Step 1: inputting three-dimensional volume data V, letting the indices of the voxels in V be (i, j, k), with 0 ≤ i < W, 0 ≤ j < H, 0 ≤ k < D, letting the coordinates of the center point of the first voxel be (x₀, y₀, z₀), letting the size of each voxel be Δx × Δy × Δz, and letting the value of the (i, j, k)-th voxel be v(i, j, k);
Step 2: performing direct volume rendering on the volume data V using a ray casting algorithm to obtain a volume rendering result, with the opacity transfer function used in the rendering being OTF, the color transfer function being CTF, and the gradient opacity transfer function being GTF;
Step 3: connecting the center points of all voxels in V into a three-dimensional orthogonal rectangular grid G, and denoting the grid points by G(i, j, k), with 0 ≤ i < W, 0 ≤ j < H, 0 ≤ k < D;
Step 4: establishing an initial geometric model M, taking all grid points G(i, j, k) of G as the vertices of M, and adding the rectangles that partition each grid cell of G to M as patches;
Step 5: dividing each rectangular patch of M equally into t × t small rectangular patches, t being a positive integer greater than or equal to 1, adding the newly created lattice points to M as vertices, and replacing the original rectangular patches of M with the small rectangular patches;
Step 6: for each vertex p of M with coordinates (x, y, z), computing the value at p by interpolation from the volume data V, and then computing the normal vector, opacity, and color value of p as follows:
Step 6.1: from the value at p and the values of the volume data V, computing the gradient Gₚ of point p by interpolation, computing the gradient opacity value gₚ of p from the gradient magnitude and the gradient opacity transfer function GTF, and taking the normalized Gₚ as the normal vector Nₚ of p;
Step 6.2: computing the initial opacity value α′ₚ of point p from the value at p and the volume rendering opacity transfer function OTF, and letting the opacity value of p be αₚ = α′ₚ · gₚ;
Step 6.3: computing the color value Cₚ of point p from the value at p and the volume rendering color transfer function CTF;
Step 7: deleting every rectangular patch of M whose four vertices all have opacity value 0;
Step 8: deleting all isolated vertices in M;
Step 9: outputting the geometric model M consisting of vertices, patches, and vertex attribute data (including normal vectors, opacity, and color values).
CN202110695142.4A 2021-06-23 2021-06-23 High-quality geometric modeling method guided by direct volume rendering Active CN113436305B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110695142.4A CN113436305B (en) 2021-06-23 2021-06-23 High-quality geometric modeling method guided by direct volume rendering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110695142.4A CN113436305B (en) 2021-06-23 2021-06-23 High-quality geometric modeling method guided by direct volume rendering

Publications (2)

Publication Number Publication Date
CN113436305A (en) 2021-09-24
CN113436305B CN113436305B (en) 2023-05-12

Family

ID=77757274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110695142.4A Active CN113436305B (en) 2021-06-23 2021-06-23 High-quality geometric modeling method guided by direct volume rendering

Country Status (1)

Country Link
CN (1) CN113436305B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5499323A (en) * 1993-06-16 1996-03-12 International Business Machines Corporation Volume rendering method which increases apparent opacity of semitransparent objects in regions having higher specular reflectivity
US5570460A (en) * 1994-10-21 1996-10-29 International Business Machines Corporation System and method for volume rendering of finite element models
US6573893B1 (en) * 2000-11-01 2003-06-03 Hewlett-Packard Development Company, L.P. Voxel transfer circuit for accelerated volume rendering of a graphics image
CN101373541A (en) * 2008-10-17 2009-02-25 东软集团股份有限公司 Method and apparatus for drafting medical image body
CN102999936A (en) * 2012-11-19 2013-03-27 北京中海新图科技有限公司 Three-dimensional streamline volume rendering algorithm based on ocean flow field data
CN104167013A (en) * 2014-08-04 2014-11-26 清华大学 Volume rendering method for highlighting target area in volume data
CN108460835A (en) * 2018-03-05 2018-08-28 北京理工大学 A method of geometric object is incorporated into volume drawing result
CN109544688A (en) * 2018-11-22 2019-03-29 北京理工大学 A kind of volume drawing fusion method based on opacity transfer function
CN110211216A (en) * 2019-06-14 2019-09-06 北京理工大学 A kind of 3-D image airspace fusion method based on the weighting of volume drawing opacity
CN110211207A (en) * 2019-06-14 2019-09-06 北京理工大学 A kind of three-dimensional flow field method for visualizing to be added up based on streamline length

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
NA WANG et al.: "GIMI: A New Evaluation Index for 3D Multimodal Medical Image Fusion", 2018 14th International Conference on Computational Intelligence and Security (CIS) *
向俊利 et al.: "Research on Transfer Functions for Volume Rendering of Typhoon Data" (in Chinese), Computer Applications and Software *
夏冰心: "Research on 3D Seismic Data Visualization Methods Based on Texture Mapping" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology *
徐志敬: "Design and Implementation of a 3D Flow Field Visualization Algorithm Based on VTK" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology *

Also Published As

Publication number Publication date
CN113436305B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
TWI618030B (en) Method and system of graphics processing enhancement by tracking object and/or primitive identifiers, graphics processing unit and non-transitory computer readable medium
US9582923B2 (en) Volume rendering color mapping on polygonal objects for 3-D printing
US9367943B2 (en) Seamless fracture in a production pipeline
US20020009224A1 (en) Interactive sculpting for volumetric exploration and feature extraction
JP2006198060A (en) Image processing method and image processing program
CN101625766A (en) Method for processing medical images
JP6215057B2 (en) Visualization device, visualization program, and visualization method
JPH02899A (en) Graphic display device and generation of three-dimensional image of object
JP4885042B2 (en) Image processing method, apparatus, and program
US9846973B2 (en) Method and system for volume rendering color mapping on polygonal objects
US9342913B2 (en) Method and system for emulating inverse kinematics
JP7247577B2 (en) 3D reconstructed image display device, 3D reconstructed image display method, program, and image generation method
CN113436305B (en) High-quality geometric modeling method guided by direct volume rendering
van Almsick et al. GPU-based ray-casting of spherical functions applied to high angular resolution diffusion imaging
CN112233791B (en) Mammary gland prosthesis preparation device and method based on point cloud data clustering
JP2012221448A (en) Method and device for visualizing surface-like structures in volume data sets
Stewart et al. Rebuilding the visible man
Tang et al. A virtual reality-based surgical simulation system for virtual neuroendoscopy
Cheng et al. Research on medical image three dimensional visualization system
JP4292645B2 (en) Method and apparatus for synthesizing three-dimensional data
Eichelbaum et al. Image-space tensor field visualization using a LIC-like method
Trapp et al. Interactive close-up rendering for detail+ overview visualization of 3d digital terrain models
Oh et al. Acceleration technique for volume rendering using 2D texture based ray plane casting on GPU
Ghobadi et al. Ray casting based volume rendering of medical images
CN114882195A (en) Medical image processing method based on three-dimensional visualization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant