CN103106685B - GPU-based three-dimensional visualization method for abdominal organs - Google Patents

GPU-based three-dimensional visualization method for abdominal organs

Info

Publication number: CN103106685B
Application number: CN201310015075.2A
Authority: CN (China)
Legal status: Active
Other versions: CN103106685A
Original language: Chinese (zh)
Inventors: 姜慧研, 项飞, 邹坤
Assignee: Northeastern University China


Abstract

A GPU-based three-dimensional visualization method for abdominal organs, belonging to the technical field of medical image processing. A computed tomography device performs a CT scan of the human abdomen; abdominal CT images are obtained and processed for three-dimensional visualization to produce a result image. Corresponding-point matching interpolation is applied to the abdominal CT images to obtain new interpolated slices, and the interpolated slices together with the original images form the interpolated abdominal CT images and their volume data. Ray casting is then performed to obtain the abdominal-CT ray-cast result image, and the result image of the three-dimensional visualization is displayed on the screen of a computer equipped with a discrete GPU. Interpolating before the three-dimensional visualization reduces the interslice spacing of the tomographic images; performing ray casting on the GPU exploits its high degree of parallelism and effectively reduces computation time.

Description

GPU-based three-dimensional visualization method for abdominal organs
Technical field
The invention belongs to the technical field of medical image processing, and specifically relates to a GPU-based three-dimensional visualization method for abdominal organs.
Background technology
Medical image three-dimensional visualization refers to techniques, grounded in scientific visualization, that reconstruct a three-dimensional view from a series of two-dimensional tomographic medical images and use a computer to display and manipulate it. First, volume data such as CT and MRI are reconstructed into a three-dimensional model, and the rendering of the volume data is projected and displayed from all directions. This lets physicians observe anatomical structures from different viewpoints, provides quantitative descriptions of the size, shape and spatial position of a region or organ of interest, helps them understand complex anatomical detail, and gives an intuitive visual impression. Combined with clinical practice and user interaction, arbitrary-plane cutting of the visualization result supports auxiliary diagnostic operations such as surgical simulation, surgical planning and virtual endoscopy, improving diagnostic accuracy and hospital efficiency. Anatomy is highly complex: a physician usually cannot watch the actual course of an operation, and because surgery is high-risk it cannot be rehearsed on the patient in advance. With visualization, computer simulation on the reconstructed three-dimensional view makes it possible to design and select the best surgical plan, and the progress of an operation can be monitored on screen, so that physicians know exactly what is happening and the success rate of surgery improves. Second, most domestic hospitals still rely on traditional film for diagnosis; storing large volumes of film is a serious problem and a considerable expense. Digitizing the hospital not only facilitates diagnosis and hospital management but also cuts costs. Three-dimensional visualization of medical images therefore plays a positive role in advancing image processing and visualization technology, and studying and realizing the three-dimensional visualization of two-dimensional medical images has significant value.
Medical data visualization algorithms fall into two broad classes: surface rendering algorithms, which construct intermediate geometric primitives, and volume rendering algorithms. Surface rendering first reconstructs a surface model of the object from the volume data, then uses traditional computer graphics techniques and hardware to render and display that surface; volume rendering instead takes the voxel as its elementary unit and, applying principles of vision, resamples the volume data and composites the samples into a three-dimensional image.
Surface rendering is an important means of extracting meaningful, visualizable information from a three-dimensional data field. It converts the volume data into an approximating surface representation, so that further computer graphics techniques, and even existing hardware acceleration, can complete the extraction of the information of interest. Because it works through this intermediate surface conversion rather than rendering the volume data directly to the screen, it is sometimes called indirect volume rendering. The most common representation of the three-dimensional geometric model is the surface model, generally approximated and represented by planar patches, in particular triangular patches. Surface reconstruction methods divide into contour-based and voxel-based approaches; the most representative voxel-based methods are the Cuberille, Marching Cubes and Dividing Cubes algorithms.
Volume rendering projects the entire data field semi-transparently onto the 2-D screen without intermediate geometry, producing the screen image directly from the three-dimensional data field. It can render a complete image of the data field, including every detail, and offers high image quality and good suitability for parallel processing. Its main problems are a very large computational load and the difficulty of exploiting traditional graphics hardware, so rendering times are long. The volume rendering algorithms currently applied to medical volume data are mainly the image-space ray casting algorithm, the object-space splatting algorithm, the shear-warp algorithm, and hardware-assisted 3D texture mapping.
Volume rendering can be computed by an integral along each ray. Ray casting is an image-order direct volume rendering method, first proposed by M. Levoy in 1988, and remains the most fundamental and flexible volume rendering algorithm. Its basic idea is: for each pixel of the image to be rendered, cast a single ray from the viewpoint through the pixel center into the volume; along the ray, from the point where it enters the volume data, compute and integrate the optical properties of the voxels encountered. Because ray casting assigns each voxel a transparency and a color value and composites them into the image, it preserves image detail and produces high-quality renderings; it is especially suited to images with fuzzy region boundaries and strong voxel-feature correlation. However, every voxel must be traversed, and whenever the viewing direction changes the ordering of the sample points changes with it, requiring resampling and recomputation, so the computational load is large.
With the development of medical imaging technology, imaging resolution keeps increasing. Every ray-casting rendering pass must process the entire volume, a huge amount of data, and traditional abdominal-organ visualization systems, implemented mainly in CPU programming languages such as C and C++, run too slowly to offer the real-time behavior medical diagnosis requires, which limits them in practice. Improving the performance of the visualization algorithms in such a system is therefore of great value both for medical diagnosis and for computer graphics and image processing.
The visualization of abdominal organs is highly parallel and therefore well suited to GPU acceleration. Compared with a CPU, a GPU has a parallel computing architecture and very long pipelines; although this architecture is slower than a high-frequency CPU on small amounts of data, on large data such as medical images it is far faster than the CPU. The GPU also uses SIMD (Single Instruction, Multiple Data): a single arithmetic operation, written once, is applied in parallel across all the data that needs parallel computation. In summary, a GPU-based abdominal-organ three-dimensional visualization system runs more efficiently than a traditional CPU-based one, can guarantee the real-time behavior of the system, and thus has high practical value.
GLSL (The OpenGL Shading Language) is the OpenGL shading language, used to write OpenGL shaders. As part of the OpenGL 2.0 standard, GLSL lets an application explicitly specify the operations performed when processing vertices and fragments. The GPU-based abdominal-organ three-dimensional visualization method can be implemented in GLSL, improving the real-time performance of the system.
Summary of the invention
To address the problems of the prior art, the invention provides a GPU-based three-dimensional visualization method for abdominal organs.
The technical scheme of the invention is as follows:
A GPU-based three-dimensional visualization method for abdominal organs comprises the following steps:
Step 1: a computed tomography device performs a CT scan of the human abdomen;
Step 2: abdominal CT images are obtained; each group of abdominal CT images comprises several abdominal tomographic slices;
Step 3: three-dimensional visualization processing is applied to the obtained abdominal CT images to obtain the result image of the processing;
Step 3.1: corresponding-point matching interpolation is applied to the abdominal CT images, producing a new interpolated slice between every pair of adjacent slices; the interpolated slices together with the original images form the interpolated abdominal CT images and their volume data;
Step 3.2: from the virtual viewpoint in the computer, a virtual ray is cast into the volume data through each pixel of an initially blank ray-casting image; the ray-cast display result of each pixel is computed, and the set of all pixel results forms the abdominal-CT ray-cast result image;
Step 3.3: arbitrary-plane cutting is applied to the volume data, excising the region of non-interest from the volume data; ray casting on the cut volume yields the ray-cast result image of the cut volume;
Step 3.4: if several volumes must be displayed at once, multi-volume ray casting is performed and the ray-cast result images of the different volumes are blended into one image, which becomes the result image of the three-dimensional visualization; otherwise the ray-cast result image of the cut volume is used directly as the result image of the three-dimensional visualization;
Step 4: the result image of the three-dimensional visualization of the abdominal CT images is displayed on the screen of a computer equipped with a discrete GPU.
In step 3.1, corresponding-point matching interpolation of the abdominal CT images proceeds as follows:
Step 3.1.1: matching interpolation is applied to the abdominal CT images, producing a new interpolated slice between every pair of adjacent slices;
Step 3.1.2: for every point of each new interpolated slice, the gray difference of its corresponding points in the two adjacent slices is computed, and a gray-difference threshold is set;
Step 3.1.3: if the computed gray difference exceeds the threshold, a search-window radius and a gray threshold are set; in each of the two adjacent slices a matching window of that radius is centered on the corresponding point, and the best matching point is chosen within the window to determine the gray value of the current point; otherwise the gray value of the current point is obtained by cubic spline interpolation;
Step 3.1.4: the gray values of all points of each new interpolated slice are obtained;
Step 3.1.5: the gray values of the original abdominal CT images together with the gray values of every point of each interpolated slice give the interpolated abdominal CT images and their volume data; all pixels of the volume data form a three-dimensional data field.
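The gating logic of steps 3.1.1 to 3.1.5 can be sketched as follows. This is a simplified illustration, not the patented method: linear averaging stands in for the cubic spline of step 3.1.3, and the best match is chosen by intensity alone rather than by the full dissimilarity measure of step 3.1.3.1; the name `interpolate_slice` and the defaults `T` and `W` are illustrative.

```python
import numpy as np

def interpolate_slice(s0, s1, T=10, W=2):
    """Build one new slice between adjacent CT slices s0 and s1.

    Where the gray difference |s0 - s1| <= T, interpolate directly
    (linear here, standing in for the method's cubic spline).
    Where it exceeds T, search a (2W+1)^2 window in s1 for the pixel
    whose intensity best matches s0 and interpolate with that point.
    """
    s0 = s0.astype(float)
    s1 = s1.astype(float)
    out = 0.5 * (s0 + s1)                  # smooth-region interpolation
    flagged = np.abs(s0 - s1) > T          # large interlayer change
    H, Wd = s0.shape
    for i, j in zip(*np.nonzero(flagged)):
        i0, i1 = max(0, i - W), min(H, i + W + 1)
        j0, j1 = max(0, j - W), min(Wd, j + W + 1)
        win = s1[i0:i1, j0:j1]
        # best match: window pixel with intensity closest to s0[i, j]
        k = np.unravel_index(np.argmin(np.abs(win - s0[i, j])), win.shape)
        out[i, j] = 0.5 * (s0[i, j] + win[k])
    return out
```

A pixel whose interlayer change stays below the threshold is simply averaged; a flagged pixel is matched against its window before averaging, which is what keeps moving structures from being smeared across slices.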
In step 3.2, ray casting proceeds as follows:
Step 3.2.1: the volume data is loaded into GPU memory as a 3-D texture;
Step 3.2.2: a color transfer function and a transparency transfer function are set, and each is loaded into GPU memory as a 1-D texture;
Step 3.2.3: from the virtual viewpoint in the computer, a virtual ray is cast into the volume data through each pixel of the initially blank ray-casting image;
Step 3.2.4: all cast rays are processed in parallel on the GPU; resampling is performed along each ray, the samples are composited, and the color and opacity of each pixel are computed from the configured color and transparency transfer functions, giving the ray-cast display result of each pixel;
Step 3.2.5: the set of all ray-cast display results forms the abdominal-CT ray-cast result image.
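A minimal CPU emulation of the loop of steps 3.2.1 to 3.2.5 can make the data flow concrete (on the GPU, every pixel's loop runs in parallel). It assumes orthographic rays along the z axis and plain Python callables in place of the 1-D texture lookups; `raycast_ortho` and its defaults are illustrative, and compositing follows step 3.2.4.8 below (average the sampled colors, sum the opacity-corrected transparencies).

```python
import numpy as np

def raycast_ortho(vol, color_tf, alpha_tf, delta=0.5, N=2):
    """Sketch of the ray cast: one orthographic ray per pixel along z.

    color_tf(g) / alpha_tf(g) stand in for the 1-D texture lookups.
    Per pixel: resample along the ray, correct each opacity for the
    sampling interval, then average colors and sum opacities.
    """
    H, W, D = vol.shape
    img = np.zeros((H, W, 3))
    alpha = np.zeros((H, W))
    zs = np.arange(0.0, D - 1, delta)            # resampling positions
    for i in range(H):
        for j in range(W):
            cols, As = [], []
            for z in zs:
                z0 = int(z)
                t = z - z0
                g = (1 - t) * vol[i, j, z0] + t * vol[i, j, z0 + 1]
                A = alpha_tf(g)
                As.append(1.0 - (1.0 - A) ** (delta * N))  # opacity correction
                cols.append(color_tf(g))
            img[i, j] = np.mean(cols, axis=0)
            alpha[i, j] = min(1.0, sum(As))
    return img, alpha
```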
In step 3.3, arbitrary-plane cutting of the volume data proceeds as follows:
Step 3.3.1: the user selects four points in the volume data to determine the cutting plane;
Step 3.3.2: the gray values of the voxels outside the cutting plane are set to 0, excising the region of non-interest from the volume data;
Step 3.3.3: from the virtual viewpoint in the computer, a virtual ray is cast into the volume data through each pixel; the ray-cast display result of each pixel is computed, and the set of all pixel results forms the ray-cast result image of the cut volume.
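The cutting of steps 3.3.1 and 3.3.2 can be sketched with a signed-distance test. Three of the user-picked points suffice to define the plane (the method lets the user pick four); `cut_volume` and `keep_positive` are illustrative names.

```python
import numpy as np

def cut_volume(vol, p0, p1, p2, keep_positive=True):
    """Zero out all voxels on one side of the plane through p0, p1, p2.

    The plane normal is the cross product of two in-plane edges; each
    voxel's sign of (coord - p0) . n decides which side it lies on,
    and the discarded side is set to 0 as in step 3.3.2.
    """
    n = np.cross(np.subtract(p1, p0), np.subtract(p2, p0)).astype(float)
    ii, jj, kk = np.indices(vol.shape)
    coords = np.stack([ii, jj, kk], axis=-1).astype(float)
    side = (coords - np.asarray(p0, float)) @ n   # signed distance * |n|
    keep = side >= 0 if keep_positive else side <= 0
    return np.where(keep, vol, 0)
```

For example, three points with a constant z coordinate define an axial cutting plane, and the voxels below it are excised.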
In step 3.4, multi-volume ray casting blends the ray-cast result images of the different volumes into one image as follows:
The volumes are placed in the same space. In regions where the volumes do not overlap, the multi-volume result image equals the single-volume abdominal-CT ray-cast result. In an overlapping region, with a weight factor ω, the result image I of the overlap is computed as:
I = ω·I1 + (1 − ω·α1)·I2
where I1 and I2 are the ray-cast result images of the two overlapping volumes and α1 is the opacity of the first image. If the overlap involves three or more volumes, the result for the first two volumes is computed first, then combined with the third, and so on.
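The overlap formula can be checked numerically. The sketch below folds I = ω·I1 + (1 − ω·α1)·I2 over a list of per-volume results at one overlapping pixel; how opacity accumulates across three or more volumes is not specified in the description, so the clamped sum used here is an assumption, and `blend_volumes` is an illustrative name.

```python
def blend_volumes(results):
    """Fold the two-volume overlap formula over >= 2 volumes.

    results: list of (I, alpha, w) tuples, one per volume's ray-cast
    value at the overlapping pixel, in blending order.
    """
    I, a, w = results[0]
    for I2, a2, w2 in results[1:]:
        I = w * I + (1.0 - w * a) * I2   # formula from the description
        a = min(1.0, a + a2)             # assumed opacity accumulation
        w = w2
    return I, a
```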
In step 3.1.3, the best matching point within the matching window is chosen as follows:
Step 3.1.3.1: of the two adjacent slices, one is designated the target image and the other the reference image, and dissimilarity weight factors are set. For each point Pk in the target-image matching window whose gray value exceeds the gray threshold, the dissimilarity between Pk and every point Pk+1 in the reference-image matching window is computed, and the point of minimum dissimilarity is selected as the best match of Pk;
Step 3.1.3.2: the reference and target images are exchanged and step 3.1.3.1 is re-executed. If the two passes agree on the best matching point, linear interpolation gives the gray value of the point in the new interpolated slice; otherwise cubic spline interpolation is used.
In step 3.2.4, the parallel processing of all cast rays on the GPU proceeds as follows:
Step 3.2.4.1: for a cast ray L, obtain the projection coordinates of its entry point into the data field and of its exit point;
Step 3.2.4.2: set the sampling interval; after L enters the data field it is sampled once every sampling-interval distance;
Step 3.2.4.3: for a sample point P, convert its projection coordinate β to the model-local coordinate α;
Step 3.2.4.4: perform a 3-D texture lookup at α in the data field to obtain the gray value of P;
Step 3.2.4.5: perform 1-D texture lookups on the gray value of P in the color and transparency transfer functions to obtain the color and transparency of P;
Step 3.2.4.6: apply opacity correction to the transparency, eliminating the over-sampling that overly dense sample points could cause;
Step 3.2.4.7: compute the final color I of sample point P according to the Phong illumination model;
Step 3.2.4.8: average the colors of all sample points on ray L to obtain the color of the corresponding pixel of the result image, and sum the corrected transparencies to obtain the transparency of that pixel.
The three-dimensional visualization system used by the GPU-based abdominal-organ visualization method comprises a computer equipped with a discrete GPU and a computed tomography device;
the computed tomography device obtains the abdominal CT images and transmits them to the computer equipped with the discrete GPU;
the computer equipped with the discrete GPU performs the three-dimensional visualization processing of the abdominal CT images.
Beneficial effects:
The invention interpolates before the three-dimensional visualization of the medical images, reducing the interslice spacing of the tomographic images; ray casting on the GPU exploits its high degree of parallelism and effectively reduces computation time; the Phong illumination model improves the volume rendering and strengthens realism. Multi-volume ray casting addresses shortcomings of whole-body visualization and single-volume rendering: setting one transfer function uniformly for the whole body lacks specificity and gives a weak visualization, whereas using the segmented abdominal organs, displaying them separately and setting a transfer function for each visualizes the organs effectively and shows the detail of each organ. Finally, the invention provides an implementation of arbitrary-plane cutting of the three-dimensional data field, revealing richer internal information of the volume data and meeting physicians' needs for image quality and interactive performance.
Description of the drawings
Fig. 1 is the ray-casting schematic of the specific embodiment of the invention;
Fig. 2 is the flow chart of traditional CPU-based ray casting;
Fig. 3 is the ray-casting flow chart of the method of the specific embodiment;
Fig. 4 is the flow chart of the GPU-based abdominal-organ three-dimensional visualization method of the specific embodiment;
Fig. 5 is the flow chart of the three-dimensional visualization processing of the obtained abdominal CT images in the specific embodiment;
Fig. 6 shows ray-cast result images from two arbitrary angles in the specific embodiment;
Fig. 7 shows volume-cutting result images of the first group of abdominal CT images of the specific embodiment, where (a) is the coronal cut, (b) the transverse cut, (c) the sagittal cut, and (d) to (f) arbitrary-plane cuts;
Fig. 8 shows the multi-volume ray-cast result of the first, fourth and fifth groups of abdominal CT images of the specific embodiment, where (a) is the normal visualization result and (b) the visualization result highlighting the gall bladder.
Embodiment
The specific embodiment of the invention is described in detail below with reference to the accompanying drawings.
In this embodiment, the three-dimensional visualization system comprises a computer equipped with a discrete GPU and a computed tomography device;
the computed tomography device obtains the abdominal CT images and transmits them to the computer equipped with the discrete GPU;
the computer equipped with the discrete GPU performs the three-dimensional visualization processing of the abdominal CT images. Its hardware configuration is an Intel Core 2 Duo T6500 CPU with 2 GB of memory, and the GPU is an NVIDIA GeForce GT 130M graphics card.
A GPU-based three-dimensional visualization method for abdominal organs, shown in Fig. 4, comprises the following steps:
Step 1: the computed tomography device performs a CT scan of the human abdomen;
Step 2: five groups of abdominal CT images are obtained; each group comprises several abdominal tomographic slices;
The five groups are: the first group, liver CT images of 163 slices, size 512 × 512 × 163, pixel spacing 0.68 × 0.68 × 2 mm; the second group, liver CT images of 85 slices, size 256 × 256 × 85, spacing 0.68 × 0.68 × 2 mm; the third group, liver CT images of 185 slices, size 708 × 706 × 185, spacing 0.68 × 0.68 × 2 mm; the fourth group, spleen CT images of 98 slices, size 512 × 512 × 98, spacing 0.68 × 0.68 × 2 mm; the fifth group, gall-bladder CT images of 46 slices, size 560 × 558 × 46, spacing 0.68 × 0.68 × 2 mm.
Step 3: three-dimensional visualization processing is applied to the five groups of abdominal CT images to obtain the result images of the processing; the flow is shown in Fig. 5 and the steps are as follows:
Step 3.1: corresponding-point matching interpolation is applied to the five groups of abdominal CT images, producing a new interpolated slice between every pair of adjacent slices; the interpolated slices together with the original images form the interpolated abdominal CT images and their volume data. The concrete steps are:
Step 3.1.1: matching interpolation is applied to the abdominal CT images, producing a new interpolated slice between every pair of adjacent slices; a point of the new slice is written (xi, yj, z), and its corresponding points in the two adjacent slices are (xi, yj, zk) and (xi, yj, zk+1);
Step 3.1.2: for every point of each new interpolated slice, the gray difference Dij of its corresponding points in the two adjacent slices is computed, and the gray-difference threshold T is set to 10;
Step 3.1.3: if Dij > T, the search-window radius is set to W = 5 and the gray threshold to V = 0; in each of the two adjacent slices a matching window of radius W is centered on the corresponding point, and the best matching point is chosen within the window to determine the gray value of the current point; otherwise the gray value of the current point is obtained by cubic spline interpolation;
Choosing the best matching point within the window proceeds as follows:
Step 3.1.3.1: one of the two adjacent slices is designated the target image and the other the reference image, and the dissimilarity weight factors are set to u1 = u2 = u3 = u4 = 0.25. For each point Pk in the target-image matching window whose gray value exceeds V, the dissimilarity between Pk and every point Pk+1 in the reference-image matching window is computed, and the point of minimum dissimilarity is selected as the best match of Pk;
The dissimilarity C(Pk, Pk+1) between Pk and Pk+1 is:
C(Pk, Pk+1) = u1[f(Pk) − f(Pk+1)]·ie + u2[g(Pk) − g(Pk+1)]·je + u3[θ(Pk) − θ(Pk+1)]·ke + u4·D[Pk, Pk+1]·le
where ie, je, ke, le are the unit vectors of the gray, gradient, gradient-direction and offset axes respectively, f(Pk) is the gray value of the point on slice k, g(Pk) its gradient magnitude, θ(Pk) its gradient direction, and D[Pk, Pk+1] the distance between the horizontal-plane projections of the two points on slices k and k+1;
Step 3.1.3.2: the reference and target images are exchanged and step 3.1.3.1 is re-executed; if the two passes agree on the best matching point, linear interpolation gives the gray value of the point (xi, yj, z) of the new interpolated slice; otherwise cubic spline interpolation is used to obtain the gray value of (xi, yj, z).
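The matching criterion of step 3.1.3.1 can be sketched numerically. The formula expresses the four weighted terms as components along unit vectors ie to le; taking the magnitude of each term and summing, as below, is one plausible scalarization and is an assumption, as are the name `dissimilarity` and the point encoding (gray, gradient, gradient direction, in-plane position).

```python
import math

def dissimilarity(pk, pk1, u=(0.25, 0.25, 0.25, 0.25)):
    """C = u1|f - f'| + u2|g - g'| + u3|theta - theta'| + u4*D.

    pk, pk1: (gray, gradient, gradient_direction, (x, y)) for the
    candidate points on slice k and slice k+1.
    """
    f0, g0, t0, xy0 = pk
    f1, g1, t1, xy1 = pk1
    D = math.dist(xy0, xy1)          # in-plane offset of the point pair
    terms = (abs(f0 - f1), abs(g0 - g1), abs(t0 - t1), D)
    return sum(ui * ti for ui, ti in zip(u, terms))
```

The best match of Pk is then the window point Pk+1 minimizing this value.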
Step 3.1.4: the gray values of all points of each new interpolated slice are obtained;
Step 3.1.5: the gray values of the original abdominal CT images together with the gray values of every point of each interpolated slice give the interpolated abdominal CT images and their volume data; all pixels of the volume data form a three-dimensional data field.
All five groups of abdominal CT images are interpolated; interpolation doubles the data volume of the abdominal CT images and effectively strengthens the three-dimensional visualization.
Step 3.2: from the virtual viewpoint in the computer, a virtual ray is cast into the volume data through each pixel of the initially blank ray-casting image, as shown in Fig. 1; the ray-cast display result of each pixel is computed, and the set of all pixel results forms the abdominal-CT ray-cast result image. The ray-casting flow is shown in Fig. 3 and the concrete steps are:
Step 3.2.1: the volume data is loaded into GPU memory as a 3-D texture;
Step 3.2.2: the color transfer function and the transparency transfer function are set, and each is loaded into GPU memory as a 1-D texture;
The color and transparency transfer functions share the same form:
f(x) = v1 for 0 < x ≤ T1; v2 for T1 < x ≤ T2; …; vn for x > Tn−1
where f(x) is the color or transparency transfer function, x is the sample gray value, Ti are the configured thresholds and vi the configured sample results;
In this embodiment the transfer functions are set as follows:
f1(x) = (0, 0, 0) for 0 < x ≤ 20; (0.6, 0.1, 0.1) for 20 < x ≤ 60
f2(x) = (0, 0, 0) for 0 < x ≤ 20; (0.9, 0.6, 0.2) for 20 < x ≤ 60
f3(x) = (0, 0, 0) for 0 < x ≤ 20; (0.1, 0.6, 0.1) for 20 < x ≤ 60
f4(x) = 0 for 0 < x ≤ 20; 100 for 20 < x ≤ 60
where f1(x), f2(x) and f3(x) are the color transfer functions of the liver, spleen and gall-bladder images respectively, and f4(x) is the transparency transfer function shared by all image types;
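Piecewise-constant transfer functions of this form can be expressed as simple lookup closures, which is essentially what the 1-D texture lookup does on the GPU; `make_tf` is an illustrative helper, shown with the embodiment's liver color function f1 and shared transparency function f4.

```python
def make_tf(breaks):
    """Build f(x) from a list of (upper_threshold, value) pairs.

    Gray values above the last threshold map to the last value,
    matching the pattern f(x) = vn for x > Tn-1.
    """
    def f(x):
        for T, v in breaks:
            if x <= T:
                return v
        return breaks[-1][1]
    return f

# The embodiment's liver color function f1 and shared transparency f4:
f1 = make_tf([(20, (0, 0, 0)), (60, (0.6, 0.1, 0.1))])
f4 = make_tf([(20, 0), (60, 100)])
```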
Step 3.2.3: from the virtual viewpoint in the computer, a virtual ray is cast into the volume data through each pixel of the initially blank ray-casting image;
A blank result image is first created as the initial ray-casting image, and its content is obtained by ray casting.
Step 3.2.4: process all cast rays in parallel on the GPU, resample along each ray, composite the samples, and compute the color and opacity of each pixel according to the set color and transparency transfer functions, obtaining the ray-casting display result of each pixel; the concrete steps are as follows:
Step 3.2.4.1: for a cast ray L, obtain the projection coordinate of its entry point P1(x1, y1, z1) into the data field and the projection coordinate of its exit point P2(x2, y2, z2); the ray L can then be expressed as:

L = P2(x2, y2, z2) - P1(x1, y1, z1)
Step 3.2.4.2: set the sampling interval delta = 0.05, i.e., after the cast ray L enters the data field, one sample is taken every delta units of distance;
Step 3.2.4.3: for a sample point P, convert its projection coordinate β to the model-local coordinate α by:

α = V^-1 · Proj^-1 · β

where V is the model-view matrix and Proj is the projection matrix, both obtained by querying the GPU. The x and y coordinates of β are identical to those of P1, and the z coordinate of β is computed as:

z = z1 + delta · (n - 1)

where n indicates that sample point P is the n-th sample point on the cast ray L;
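The per-sample projection coordinate of step 3.2.4.3 reduces to a one-liner (the subsequent V^-1 · Proj^-1 transform needs the GPU's matrices and is omitted here); `projection_sample` is an illustrative name:

```python
def projection_sample(p1, n, delta=0.05):
    """Projection-space coordinate of the n-th sample point on a ray:
    x and y match the entry point P1, and z advances by delta per
    sample, z = z1 + delta * (n - 1)."""
    x1, y1, z1 = p1
    return (x1, y1, z1 + delta * (n - 1))
```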
Step 3.2.4.4: perform a 3D texture lookup in the data field at the model-local coordinate α of sample point P, obtaining the gray value G of point P;
Step 3.2.4.5: perform one-dimensional texture lookups on G in the color transfer function and the transparency transfer function, obtaining the color C and the transparency A of point P;
Step 3.2.4.6: apply opacity correction to A, eliminating the over-sampling artifacts that overly dense sample points may cause; the correction formula is:

A' = 1 - (1 - A)^(delta · N)

where A' is the corrected transparency and N is a manually set oversampling factor; in the present embodiment, N = 2;
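The opacity correction is a direct transcription of the formula above; `correct_opacity` is an illustrative name:

```python
def correct_opacity(a, delta=0.05, n=2):
    """Opacity correction of step 3.2.4.6: A' = 1 - (1 - A)**(delta * N),
    which compensates for accumulation when samples are closely spaced."""
    return 1.0 - (1.0 - a) ** (delta * n)
```

With delta = 0.05 and N = 2 the exponent is 0.1, so per-sample opacity is sharply reduced before the ray's samples are composited.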
Step 3.2.4.7: compute the final color I of sample point P according to the Phong illumination model:

I = kd · Ia + kd · Il · (N · L0) + ks · Il · (V · R)^ns

where kd is the material's reflection coefficient for ambient and diffuse light, ks is its specular coefficient, and in the present embodiment both kd and ks are set to C; N denotes the unit normal vector at P, L0 the unit vector pointing from the corresponding screen pixel to the light source, V the viewing direction from P to the virtual viewpoint, and R the direction of the reflected ray; Ia and Il are the ambient light intensity and the light-source intensity, set manually to Ia = 0.15 and Il = 1.0; ns is the specular highlight exponent, ns = 0.5;
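The Phong term can be sketched per colour channel. Here a single scalar `c` stands in for the colour C that the embodiment uses as both kd and ks, the dot products are passed in precomputed, and clamping V·R to non-negative values is an added safeguard (a negative base under the fractional exponent would not be meaningful):

```python
def phong(c, ia, il, n_dot_l, v_dot_r, ns=0.5):
    """Phong illumination of step 3.2.4.7 for one colour channel:
    I = kd*Ia + kd*Il*(N.L0) + ks*Il*(V.R)**ns, with kd = ks = c."""
    return c * ia + c * il * n_dot_l + c * il * max(v_dot_r, 0.0) ** ns
```

With the embodiment's settings (Ia = 0.15, Il = 1.0, ns = 0.5) and both dot products equal to 1, a channel value of 1 yields I = 0.15 + 1 + 1 = 2.15 before any clamping to the displayable range.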
Step 3.2.4.8: average the colors of all sample points on the cast ray L to obtain the color Cxy of the corresponding pixel in the result image, and sum the corrected transparencies to obtain the transparency Axy of that pixel.
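The compositing rule of step 3.2.4.8 (mean colour, summed corrected transparencies) can be sketched as follows; clamping the summed transparency to 1 is an added safeguard not stated in the text:

```python
def composite(samples):
    """Pixel value from a ray's samples, each a (colour, transparency)
    pair: the pixel colour is the mean of all sample colours, and the
    pixel transparency is the sum of the corrected transparencies,
    clamped to 1 here as a safeguard."""
    colors = [c for c, _ in samples]
    alphas = [a for _, a in samples]
    mean_color = tuple(sum(ch) / len(colors) for ch in zip(*colors))
    return mean_color, min(1.0, sum(alphas))
```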
After ray casting the first and second groups of abdominal CT images, the ray-casting result images from two arbitrary angles are as shown in Figure 6.
Step 3.2.5: the set of all ray-casting display results forms the abdominal CT image ray-casting result image.
The traditional CPU-based ray-casting flow is shown in Figure 2; CPU-based ray casting requires many loop iterations and carries a high time cost, whereas the GPU-based ray casting of the present embodiment yields the projection result directly.
Step 3.3: apply arbitrary-plane cutting to the volume data, excise the region of non-interest from the volume data, and ray-cast the cut volume data to obtain the ray-casting result image of the cut volume data; the concrete steps are as follows:
Step 3.3.1: the user selects four points in the volume data to determine the cutting plane, whose equation is expressed as:

Ax + By + Cz + D = 0

where A, B, C, and D are coefficients obtained by solving the system of cutting-plane equations given the coordinates of the four points;
Step 3.3.2: set the gray value of every voxel lying outside the cutting plane to 0, excising the region of non-interest from the volume data;
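Steps 3.3.1-3.3.2 can be sketched as follows. Three points suffice to fix a plane via a cross product, whereas the patent solves a system from four user-selected points; treating the positive half-space as the "outside" is also an assumption. All names are illustrative:

```python
def plane_from_points(p, q, r):
    """Coefficients (A, B, C, D) of the plane Ax + By + Cz + D = 0
    through three points, via the cross product of two edge vectors."""
    u = [b - a for a, b in zip(p, q)]
    v = [b - a for a, b in zip(p, r)]
    a = u[1] * v[2] - u[2] * v[1]
    b = u[2] * v[0] - u[0] * v[2]
    c = u[0] * v[1] - u[1] * v[0]
    d = -(a * p[0] + b * p[1] + c * p[2])
    return (a, b, c, d)

def clip_volume(volume, coeffs):
    """Step 3.3.2: zero every voxel on the positive side of the cutting
    plane (assumed to be the region of non-interest). `volume` maps
    (x, y, z) voxel coordinates to gray values."""
    a, b, c, d = coeffs
    return {pt: (0 if a * pt[0] + b * pt[1] + c * pt[2] + d > 0 else g)
            for pt, g in volume.items()}
```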
Step 3.3.3: cast a virtual ray from the virtual viewpoint in the computer into the volume data through each pixel of the abdominal CT image, compute the ray-casting display result of each pixel, and form the ray-casting result image of the cut volume data from the set of all pixel results.
The results of volume-data cutting on the first group of abdominal CT images are shown in Figure 7: coronal-plane cutting in Fig. 7(a), transverse-section cutting in Fig. 7(b), sagittal-plane cutting in Fig. 7(c), and arbitrary-plane cutting in Fig. 7(d), (e), and (f).
Step 3.4: if multiple volume data sets need to be displayed simultaneously, perform multi-volume ray casting and fuse the ray-casting result images of the different volume data sets into one result image, which serves as the result image of the three-dimensional visualization processing; otherwise, directly take the ray-casting result image of the cut volume data as the result image of the three-dimensional visualization processing;
The concrete method of multi-volume ray casting, fusing the results of the different volume data sets into one result image, is as follows:
Place the multiple volume data sets in the same space. In regions where the volumes do not overlap, the multi-volume ray-casting result image coincides with the abdominal CT image ray-casting result image. In overlap regions, set the weight factor ω to 0.5; the overlap-region result image I is computed as:

I = ω · I1 + (1 - ω · α1) · I2

where I1 and I2 are the ray-casting result images of the two overlapping volume data sets and α1 is the opacity in the first image. If the overlap region contains more than two volumes, first compute the result of two volumes, then combine that result with the third, and so on.
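The overlap-region fusion can be sketched with scalar intensities standing in for images. Carrying the first volume's opacity forward when folding in a third volume is an assumption, since the intermediate opacity is left unspecified; `blend_pair` and `blend_many` are illustrative names:

```python
def blend_pair(i1, a1, i2, omega=0.5):
    """Overlap fusion of step 3.4: I = w*I1 + (1 - w*a1)*I2, where a1
    is the opacity of the first image and w the weight factor."""
    return omega * i1 + (1 - omega * a1) * i2

def blend_many(images, omega=0.5):
    """Fold blend_pair over (intensity, opacity) pairs left to right,
    as prescribed for three or more overlapping volumes; reusing the
    first image's opacity at every step is an assumption."""
    i, a = images[0]
    for i2, _ in images[1:]:
        i = blend_pair(i, a, i2, omega)
    return i
```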
Multi-volume ray casting was applied to the first, fourth, and fifth groups of abdominal CT images; the result image is shown in Figure 8, where A denotes the liver (the first-group result), B the spleen (the fourth-group result), and C the gall-bladder (the fifth-group result).
In the present embodiment, steps 3.3 and 3.4 are optional: step 3.3 need not be performed when the user does not need to cut the volume data, and likewise step 3.4 need not be performed when the user does not need to display multiple volume data sets simultaneously.
Step 4: display the result image of the three-dimensional visualization processing of the abdominal CT images on the screen of a computer equipped with a discrete GPU.

Claims (1)

1. A GPU-based abdominal-organ three-dimensional visualization method, comprising the following steps:
Step 1: a computed tomography device performs a CT scan of the human abdomen;
Step 2: obtain abdominal CT images, each group of which comprises several abdominal tomographic images;
Step 3: perform three-dimensional visualization processing on the obtained abdominal CT images to obtain the result image of the three-dimensional visualization processing;
Step 3.1: perform corresponding-point matching interpolation on the abdominal CT images, obtaining a new interpolated image between every two adjacent abdominal tomographic images; the new interpolated images and the original abdominal CT images together form the matched-interpolated abdominal CT images and the volume data;
Step 3.2: cast a virtual ray from the virtual viewpoint in the computer into the volume data through each pixel of the initial blank ray-casting image, perform the ray casting, and compute the ray-casting display result of each pixel, the set of all pixel ray-casting display results forming the abdominal CT image ray-casting result image;
Step 3.2.1: load the volume data into GPU memory as a 3D texture;
Step 3.2.2: set a color transfer function and a transparency transfer function, and load each of them into GPU memory as a one-dimensional texture;
Step 3.2.3: cast a virtual ray from the virtual viewpoint in the computer into the volume data through each pixel of the initial blank ray-casting image, performing the ray casting;
Step 3.2.4: process all cast rays in parallel on the GPU, resample along each ray, composite the samples, and compute the color and opacity of each pixel according to the set color and transparency transfer functions, obtaining the ray-casting display result of each pixel;
Step 3.2.5: the set of all ray-casting display results forms the abdominal CT image ray-casting result image;
Step 3.3: apply arbitrary-plane cutting to the volume data, excise the region of non-interest from the volume data, and ray-cast the cut volume data to obtain the ray-casting result image of the cut volume data;
Step 3.4: if multiple volume data sets need to be displayed simultaneously, perform multi-volume ray casting and fuse the ray-casting result images of the different volume data sets into one result image as the result image of the three-dimensional visualization processing; otherwise, directly take the ray-casting result image of the cut volume data as the result image of the three-dimensional visualization processing;
Step 4: display the result image of the three-dimensional visualization processing of the abdominal CT images on the screen of a computer equipped with a discrete GPU;
characterized in that said step 3.2.4, processing all cast rays in parallel on the GPU, resampling along each ray, compositing the samples, and computing the color and opacity of each pixel according to the set color and transparency transfer functions to obtain the ray-casting display result of each pixel, comprises the following concrete steps:
Step 3.2.4.1: for a cast ray L, obtain the projection coordinates of its entry point into the data field and of its exit point;
Step 3.2.4.2: set a sampling interval; after the cast ray L enters the data field, one sample is taken every sampling-interval units of distance;
Step 3.2.4.3: for a sample point P, convert its projection coordinate β to the model-local coordinate α;
Step 3.2.4.4: perform a 3D texture lookup in the data field at the model-local coordinate α of sample point P, obtaining the gray value of point P;
Step 3.2.4.5: perform one-dimensional texture lookups on the gray value of P in the color transfer function and the transparency transfer function, obtaining the color and the transparency of point P;
Step 3.2.4.6: apply opacity correction to the transparency, eliminating the over-sampling artifacts that overly dense sample points may cause;
Step 3.2.4.7: compute the final color I of sample point P according to the Phong illumination model;
Step 3.2.4.8: average the colors of all sample points on the cast ray L to obtain the color of the corresponding pixel in the result image, and sum the corrected transparencies to obtain the transparency of that pixel.
CN201310015075.2A 2013-01-16 2013-01-16 A kind of abdominal organs three-dimensional visualization method based on GPU Active CN103106685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310015075.2A CN103106685B (en) 2013-01-16 2013-01-16 A kind of abdominal organs three-dimensional visualization method based on GPU

Publications (2)

Publication Number Publication Date
CN103106685A CN103106685A (en) 2013-05-15
CN103106685B true CN103106685B (en) 2015-08-12

Family

ID=48314510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310015075.2A Active CN103106685B (en) 2013-01-16 2013-01-16 A kind of abdominal organs three-dimensional visualization method based on GPU

Country Status (1)

Country Link
CN (1) CN103106685B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103455970A (en) * 2013-08-30 2013-12-18 天津市测绘院 Method for accelerated display of invisible part by three-dimensional digital urban system model
CN104599311A (en) * 2013-10-31 2015-05-06 镇江华扬信息科技有限公司 GPU (Graphics Processing Unit)-based hybrid visual system of three-dimensional medical image
CN104658028B (en) * 2013-11-18 2019-01-22 清华大学 The method and apparatus of Fast Labeling object in 3-D image
CN104318057A (en) * 2014-09-25 2015-01-28 新乡医学院第一附属医院 Medical image three-dimensional visualization system
CN104992444B (en) * 2015-07-14 2018-09-21 山东易创电子有限公司 A kind of cutting method and system of human body layer data
CN106530382A (en) * 2016-12-09 2017-03-22 江西中科九峰智慧医疗科技有限公司 Data processing method and system for medical three-dimensional image
CN107146262B (en) * 2017-04-18 2021-01-26 广州广华深启科技有限责任公司 Three-dimensional visualization method and system for OCT (optical coherence tomography) image
CN107292865B (en) * 2017-05-16 2021-01-26 哈尔滨医科大学 Three-dimensional display method based on two-dimensional image processing
CN108492299B (en) * 2018-03-06 2022-09-16 天津天堰科技股份有限公司 Cutting method of three-dimensional image
CN109377549A (en) * 2018-09-29 2019-02-22 浙江工业大学 A kind of real-time processing of OCT finger tip data and three-dimensional visualization method
CN109767468B (en) * 2019-01-16 2021-04-20 上海长征医院 Visceral volume detection method and device
CN113347407A (en) * 2021-05-21 2021-09-03 华中科技大学 Medical image display system based on naked eye 3D

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1776747A (en) * 2005-11-24 2006-05-24 上海交通大学 GPU hardware acceleration based body drawing method for medical image
CN101794460A (en) * 2010-03-09 2010-08-04 哈尔滨工业大学 Method for visualizing three-dimensional anatomical tissue structure model of human heart based on ray cast volume rendering algorithm

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Research on 3D Segmentation and Visualization of Medical CT Images; Zhang Haibo; China Master's Theses Full-text Database, Information Science and Technology; 2005-11-15 (No. 8); pp. 13-16 *
Research on Interpolation Reconstruction Algorithms for Medical Tomographic Images; Zhu Yangxing; China Master's Theses Full-text Database, Medicine and Health Sciences; 2007-04-15 (No. 4); pp. 23-25 *
Research on GPU-based 3D Visualization Technology for Medical Images; Bu Xianglei; China Master's Theses Full-text Database, Information Science and Technology; 2010-01-15 (No. 1); pp. 29-33 *
Research on GPU-based Ray-Casting Algorithms for Organ Volume Data; Kang Jianchao; China Master's Theses Full-text Database, Information Science and Technology; 2011-08-15 (No. 8); pp. 37-44 *
Adaptive Ray-Casting Direct Volume Rendering Algorithm and Implementation; Jin Zhaoyang et al.; Chinese Journal of Medical Imaging Technology; 2005-04-30; Vol. 21 (No. 4); pp. 634-638 *

Also Published As

Publication number Publication date
CN103106685A (en) 2013-05-15


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant