CN101593345A - Three-dimensional medical image display method based on the GPU acceleration - Google Patents
- Publication number
- CN101593345A (application numbers CNA200910059864XA, CN200910059864A)
- Authority
- CN
- China
- Prior art keywords
- volume data
- proxy geometry
- polygonal slices
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Generation (AREA)
Abstract
A GPU-accelerated three-dimensional medical image display method, belonging to the technical field of medical image processing. First, the medical DICOM image sequence files are stored in system memory as volume data. Then, using the 3D graphics API extension functions of OpenGL or DirectX, the volume data is loaded into GPU video memory. Next, a proxy geometry is computed and generated, and per-pixel lighting and color calculations are performed on the polygonal slices of the proxy geometry. Finally, all polygonal slices of the proxy geometry are composited into a 3D medical image by alpha blending. Compared with existing CPU-based medical image display methods, the present invention has a very high computation speed and achieves real-time interactive display on an ordinary consumer-level PC; no graphics workstation is needed, which greatly reduces cost.
Description
Technical field
The invention belongs to the technical field of medical image processing. It mainly uses the powerful parallel stream-processing capability of the GPU (graphics processing unit) to accelerate volume rendering; compared with traditional CPU-based volume rendering techniques, this method allows volume rendering to be displayed interactively in real time.
Technical background
Medical images play an increasingly important auxiliary role in diagnosis, but on their own they only allow the human body to be observed along two-dimensional section directions. To improve the accuracy and rigor of medical diagnosis and treatment planning, two-dimensional tomographic image sequences need to be converted into intuitive three-dimensional views with stereoscopic effect. However, current three-dimensional medical diagnosis-assistance systems all require workstation-class computing platforms to come close to meeting real-time requirements, and their price makes such systems hard to popularize. Computer graphics researchers worldwide now treat offloading tasks such as large-scale data processing and computation onto the GPU as a frontier research direction: the GPU has a speed advantage the CPU cannot match, making interactive visualization of large-scale data achievable on consumer-level GPU hardware.
Current mainstream volume rendering techniques are generally based on traditional CPU computation and can hardly achieve real-time interactive display on an ordinary consumer-level PC. Real-time interactive computation and display generally has to run on a parallel graphics workstation, which raises cost considerably.
Summary of the invention
Addressing the drawbacks of existing CPU-based medical image display methods, namely slow computation and the inability to operate interactively, the present invention provides a GPU-accelerated three-dimensional medical image display method that uses the powerful stream-computing capability of the GPU to achieve interactive volume rendering.
The technical solution of the present invention is as follows:
The GPU-accelerated three-dimensional medical image display method, as shown in Figure 1, comprises the following steps:
Step 1: Read the medical DICOM image sequence files and store them in system memory as volume data.
The volume data is composed of an ordered sequence of 2D medical DICOM images. The imaging-related information of these images (image resolution, inter-slice spacing, pixel data) is read into system memory. For a better visual result, the volume data should first be preprocessed, for example denoised with an image filter; a finer result can be obtained by interpolating additional data between slices.
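The inter-slice interpolation mentioned above can be sketched as follows (a hypothetical pure-Python illustration; a real implementation would read pixel data with a DICOM library and take the slice spacing from the file headers):

```python
def interpolate_slices(slices, inserts=1):
    """Linearly interpolate `inserts` extra slices between each adjacent
    pair of 2D slices, refining the inter-slice spacing of the volume."""
    out = []
    for a, b in zip(slices, slices[1:]):
        out.append(a)
        for s in range(1, inserts + 1):
            t = s / (inserts + 1)
            # Per-pixel linear blend of the two neighboring slices.
            out.append([[(1 - t) * pa + t * pb for pa, pb in zip(ra, rb)]
                        for ra, rb in zip(a, b)])
    out.append(slices[-1])
    return out
```

For instance, interpolating one slice between two 1×1 slices with values 0 and 2 yields a middle slice with value 1.0.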
Step 2: Using the 3D graphics API extension functions of OpenGL or DirectX, load the volume data kept in system memory in step 1 into GPU video memory, where it becomes a 3D texture that the GPU can access. A concrete interface function is the OpenGL extension glTexImage3DEXT, which loads the volume data as a 3D texture.
Step 3: Compute and generate the proxy geometry, in the following sub-steps:
Step 3-1: Compute the coordinates of the vertices of the volume bounding box in the eye coordinate system.
This operation is a vertex coordinate transformation: in essence, the vertex coordinates of the volume bounding box are transformed from the volume's local coordinate system into the eye coordinate system. First the local coordinates of the eight vertices of the bounding box are converted to world coordinates; then, via the world-to-eye (view) matrix of the viewpoint, the world coordinates of the eight vertices are transformed into the eye coordinate system.
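The step above can be sketched as follows (a minimal pure-Python illustration; the row-major matrix layout and the translation-only view matrix are assumptions for the example, since the patent gives no concrete numbers):

```python
def box_corners(mins, maxs):
    """The eight vertices of an axis-aligned bounding box."""
    (x0, y0, z0), (x1, y1, z1) = mins, maxs
    return [(x, y, z) for x in (x0, x1) for y in (y0, y1) for z in (z0, z1)]

def transform(m, p):
    """Apply a 4x4 row-major transform matrix to a 3D point (w = 1)."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

# Local coordinates coincide with world coordinates, so only the
# world-to-eye (view) matrix is applied; here a pure translation by -5 in z.
view = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, -5], [0, 0, 0, 1]]
eye_corners = [transform(view, c) for c in box_corners((0, 0, 0), (1, 1, 1))]
```

With this view matrix the unit cube's corners simply move 5 units down the negative z (view) axis.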
Step 3-2: Compute the number of polygonal slices contained in the proxy geometry.
In the eye coordinate system, first compute the maximum z_max and minimum z_min of the eight bounding-box vertices along the view direction; then determine the sampling interval Δh, i.e. the distance between adjacent polygonal slices; finally compute the number of polygonal slices N_slices in the proxy geometry as:
N_slices = ⌈(z_max − z_min) / Δh⌉
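A sketch of the slice count, assuming the reconstructed formula N_slices = ⌈(z_max − z_min)/Δh⌉ and taking the depth of each corner as its dot product with the view direction (illustrative values only):

```python
import math

def slice_count(corners, view_dir, dh):
    """Project the eight bounding-box corners onto the view direction,
    then divide the depth range by the sampling interval dh."""
    depths = [sum(c * d for c, d in zip(p, view_dir)) for p in corners]
    return math.ceil((max(depths) - min(depths)) / dh)
```

For a unit cube viewed along (0, 0, 1), the depth range is 1, so Δh = 0.25 gives 4 slices.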
Step 3-3: Compute the world coordinates of the vertices of each polygonal slice of the proxy geometry.
For each polygonal slice in the proxy geometry, compute the intersection points of the plane containing that slice with the volume bounding box in the world coordinate system, then arrange the world coordinates and 3D texture coordinates of these intersection points in order (e.g. clockwise or counterclockwise) to form the vertex array of the slice. The concrete method of computing the intersection points is:
Let the extent of the volume data in its local coordinate system be [x_min, x_max], [y_min, y_max], [z_min, z_max], let the viewpoint position be e, the view direction vector be v, and the sampling interval be Δh. Each polygonal slice is the cross-section cut from the box by one of a family of parallel planes perpendicular to the view direction and spaced Δh apart. The plane containing a slice can be expressed as the equation v · (p − p₀) = 0, where p₀ is the point of the volume through which the plane passes. From the view direction and the position of p₀, the intersection points of the volume's bounding faces with the plane are computed in sequence. Because the volume's local coordinate system is made to coincide with the world coordinate system, the bounding faces are all parallel to the coordinate axes, which greatly simplifies the computation.
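The intersection computation can be sketched by clipping the twelve box edges against the slice plane (a simplified pure-Python illustration; the ordering of the resulting vertices into a polygon and the texture coordinates are omitted):

```python
def plane_box_intersections(normal, d, mins, maxs, eps=1e-9):
    """Intersect the plane normal.p = d with the edges of an axis-aligned
    box; returns the (unordered) intersection points of one slice."""
    (x0, y0, z0), (x1, y1, z1) = mins, maxs
    corners = [(x, y, z) for x in (x0, x1) for y in (y0, y1) for z in (z0, z1)]
    # Corner indices that differ in exactly one bit differ in exactly one
    # coordinate choice: these index pairs are the 12 box edges.
    edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
             if bin(i ^ j).count("1") == 1]
    dot = lambda p: sum(c * n for c, n in zip(p, normal))
    points = []
    for i, j in edges:
        a, b = corners[i], corners[j]
        da, db = dot(a) - d, dot(b) - d
        if da * db < -eps:  # the edge's endpoints lie on opposite sides
            t = da / (da - db)
            points.append(tuple(pa + t * (pb - pa) for pa, pb in zip(a, b)))
    return points
```

For example, the plane z = 0.5 cuts the unit cube in a quadrilateral: four intersection points, all at z = 0.5.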
Step 4: For each polygonal slice obtained in step 3, perform per-pixel lighting and color calculations, in the following sub-steps:
Step 4-1: Compute the volume-data gradient at each pixel of every polygonal slice in the proxy geometry.
The gradient of the three-dimensional data f(x, y, z) is expressed by the partial-derivative gradient operator ∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z). Concretely, a gradient computation based on 6-neighborhood sampling differences is adopted; the gradient at (x_i, y_j, z_k) is computed as:
∇f(x_i, y_j, z_k) ≈ (f(x_{i+1}, y_j, z_k) − f(x_{i−1}, y_j, z_k), f(x_i, y_{j+1}, z_k) − f(x_i, y_{j−1}, z_k), f(x_i, y_j, z_{k+1}) − f(x_i, y_j, z_{k−1}))
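A sketch of the 6-neighborhood difference above, on a nested-list volume indexed as f[x][y][z] (interior voxels only; boundary handling omitted):

```python
def gradient(f, i, j, k):
    """Central-difference gradient at interior voxel (i, j, k), matching
    the unnormalized form above (no division by the 2-voxel step)."""
    return (f[i + 1][j][k] - f[i - 1][j][k],
            f[i][j + 1][k] - f[i][j - 1][k],
            f[i][j][k + 1] - f[i][j][k - 1])
```

On the linear field f = x + 2y + 3z, the result at an interior voxel is (2, 4, 6), i.e. twice the true gradient, as expected for the unnormalized difference.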
Step 4-2: Use the color transfer function to compute the color of each pixel of every polygonal slice in the proxy geometry.
Because the volume data is a grayscale image, its visual effect and chromatic resolution are unfavorable for discriminating detail, so the method maps the voxel gray values of the volume data into the RGB color space via a color transfer function.
The color transfer function is a one-dimensional function of the form c = f(v), where v is the gray value of the volume data and c is the output color value. Its role is to map grayscale volume data into volume data in the RGB color space. It can be realized with a color mapping table, which can be stored inside the GPU as a 1D texture. In a medical image sequence, the gray data are stored discretely from 0 to N, where N is a natural number determined by the number of color storage bits i: N = 2^i. By precomputing the function mapping table into a 1D texture, the output color is easily obtained by sampling that texture; in the GPU's Cg language this reduces to a single texture-lookup instruction (tex1D).
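A sketch of the one-dimensional lookup table, assuming 8-bit gray values (i = 8) and a simple illustrative blue-to-red ramp; the actual mapping curve is not specified by the patent:

```python
N_BITS = 8
N = 2 ** N_BITS  # number of discrete gray levels

def make_lut(transfer):
    """Precompute the color transfer function c = f(v) into a lookup
    table, mimicking the 1D texture stored on the GPU."""
    return [transfer(v / (N - 1)) for v in range(N)]

def classify(lut, gray):
    """The per-pixel lookup: one indexed fetch, like a 1D texture sample."""
    return lut[gray]

# Illustrative ramp: dark values toward blue, bright values toward red.
lut = make_lut(lambda t: (t, 0.0, 1.0 - t))
```

At render time every voxel's gray value is classified by one table lookup, so the mapping cost is independent of the transfer function's complexity.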
Step 4-3: Compute the rasterized pixel shading value of each pixel of every polygonal slice in the proxy geometry according to the Phong illumination model.
Reflected light is divided into three parts: ambient light, diffuse reflection, and specular reflection.
Ambient light refers to the indirect influence of a light source on an object through the environment; in essence it is the light intensity reached at equilibrium after repeated reflections between objects and the environment. Strictly speaking this intensity differs from point to point on an object, but the simple illumination model approximates the ambient light within one environment as uniformly distributed, with the same intensity in every direction. It therefore appears as a constant.
Diffuse reflection: when light arrives from one direction, diffusely reflected light is scattered evenly in all directions, independent of the viewpoint. It is caused by microscopic roughness of the reflecting surface, so its spatial distribution is uniform. The computation is as follows. Let the incident light intensity be I_p, let N be the surface normal at a point P of a surface element, let L be the unit vector from P toward the light source, and let θ be the angle between them. By the cosine law of light intensity, the diffuse intensity is:
I_d = I_p · K_d · cos(θ), θ ∈ (0, π/2),
where K_d is the diffuse reflection coefficient of the object, satisfying 0 < K_d < 1. When L and N are unit vectors, this can also be written in vector form: I_d = I_p · K_d · (L · N). With multiple light sources, the contributions are summed: I_d = Σ_i I_{p,i} · K_d · (L_i · N).
Specular reflection: the most typical example is an ideal mirror, whose reflected light is concentrated in a single direction obeying the law of reflection. Generalized to ordinary smooth surfaces, the reflected light is concentrated within a range of directions in space, with maximum intensity along the reflection direction determined by the law of reflection. The specular intensity observed at the same point therefore differs with viewing position; it depends on the relation between the line of sight and the reflection direction. The specular intensity can be expressed as:
I_s = I_p · K_s · cos^n(θ), θ ∈ (0, π/2),
where K_s is the specular reflection coefficient, related to the optical properties of the object surface, θ is the angle between the view direction V and the reflection direction R, and n is the specular (shininess) exponent reflecting the glossiness of the surface, generally between 1 and 2000; as the formula shows, the larger n is, the smoother the surface. Specular reflection forms a very bright spot near the reflection direction, the so-called highlight. Likewise, if V and R are normalized to unit vectors, the specular intensity can be written as: I_s = I_p · K_s · (R · V)^n, where R can be computed by the vector reflection formula R = 2N(N · L) − L. As with diffuse reflection, for multiple light sources the specular intensity is the sum over the sources: I_s = Σ_i I_{p,i} · K_s · (R_i · V)^n.
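The three terms above can be combined in a short sketch (single light of unit incident intensity; the coefficient values ka, kd, ks and the exponent are illustrative, not from the patent):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(n_vec, l_vec, v_vec, ka=0.1, kd=0.6, ks=0.3, shininess=10, ip=1.0):
    """Phong intensity: ambient constant + diffuse Ip*Kd*(L.N)
    + specular Ip*Ks*(R.V)^n, with R = 2N(N.L) - L (unit vectors)."""
    ndl = dot(n_vec, l_vec)
    r_vec = tuple(2 * ndl * nc - lc for nc, lc in zip(n_vec, l_vec))
    diffuse = ip * kd * max(0.0, ndl)
    specular = ip * ks * max(0.0, dot(r_vec, v_vec)) ** shininess
    return ka + diffuse + specular
```

With light, normal, and viewer all aligned, every term reaches its maximum and the intensity is ka + kd + ks; at grazing incidence only the ambient constant remains.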
Step 5: Composite all polygonal slices of the proxy geometry into a 3D medical image by alpha blending.
With depth testing disabled and alpha blending enabled, all polygonal slices of the proxy geometry are composited into the 3D medical image. To improve rendering speed, the proxy geometry can be stored in a display list, so that many vertices can be rendered with a single instruction, increasing drawing efficiency.
Alpha blending is the method by which the colors of the polygon layers of the proxy geometry are mixed. The lighting model of GPU graphics hardware operates on polygon vertices and cannot be applied directly to volume rendering, so fixed-function lighting must be disabled. And since alpha blending is enabled and every polygon layer inside the proxy geometry must be rendered, depth testing must be disabled. A brief explanation follows.
Alpha blending: most special effects in OpenGL are related to blending of some type (usually of colors). Blending is defined as combining the color of the pixel being drawn with the color of the corresponding pixel already on the screen. How the two colors are combined depends on the alpha component of the color and/or the blending function in use. Alpha is usually the fourth component at the end of the color value. Most applications treat the alpha component as the opacity of the material: an alpha value of 0.0 represents fully transparent material, and an alpha value of 1.0 represents fully opaque material. The basic principle of blending is to separate the color of each incoming pixel and the background color into their RGB components, combine them by the simple rule "incoming RGB component × alpha + background RGB component × (1 − alpha)", and recombine the blended components; the formula is:
Color_final = Color_front × α + Color_back × (1 − α).
OpenGL computes the blended result of the two pixels according to this formula, which generally produces transparent/translucent mixing effects.
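The blending formula above can be sketched as back-to-front compositing of the slice colors (a pure-Python illustration with made-up colors):

```python
def composite(slices_back_to_front, background=(0.0, 0.0, 0.0)):
    """Repeatedly apply final = front*alpha + back*(1-alpha), which is
    what alpha blending does as the proxy-geometry slices are drawn
    from back to front."""
    color = background
    for rgb, alpha in slices_back_to_front:
        color = tuple(f * alpha + b * (1.0 - alpha)
                      for f, b in zip(rgb, color))
    return color
```

A fully opaque front slice replaces everything behind it, while a half-transparent slice lets half of the background through, matching the formula.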
Depth testing: fragment tests examine each pixel; only pixels that pass the test are drawn, and pixels that fail are discarded. OpenGL provides several test operations with which special effects can be achieved. The concept of the depth test is especially useful when drawing three-dimensional scenes. Without depth testing, if a nearby object is drawn first and a distant object afterwards, the distant object overwrites the nearby one simply because it is drawn later, which is not the desired result. With depth testing the situation is different: whenever a pixel is drawn, OpenGL records its depth (the depth can be understood as the pixel's distance from the observer; a larger depth value means a greater distance). When a new pixel is about to cover an existing one, the depth test checks whether the new depth is smaller than the stored depth value. If it is, the pixel is overwritten and the draw succeeds; if not, the existing pixel is kept and the draw is discarded. Thus even if the nearer object is drawn first and the farther one afterwards, the far object cannot cover the near one. In fact, as long as a depth buffer exists, OpenGL attempts to write depth data into it whenever a pixel is drawn, whether or not depth testing is enabled, unless writing has been disabled by calling glDepthMask(GL_FALSE). Besides the conventional test, these depth data have other interesting uses, such as drawing shadows. In addition to the depth test, OpenGL also provides the scissor test, the alpha test, and the stencil test.
The invention provides a GPU-accelerated medical image display method. Compared with existing CPU-based medical image display methods, the present invention has a very high computation speed and achieves real-time interactive display on an ordinary consumer-level PC; no graphics workstation is needed, which greatly reduces cost.
Description of drawings
Fig. 1 is a flow diagram of the present invention.
Embodiment
Using the technical scheme above, the present invention has been verified on both MRI and CT medical DICOM image sequences, with good display results in both cases. Note that MRI images contain more noise and therefore require denoising preprocessing, whereas CT images are relatively clean, need no denoising preprocessing, and can be displayed directly as volume data.
Claims (4)
1. A GPU-accelerated three-dimensional medical image display method, comprising the following steps:
Step 1: read the medical DICOM image sequence files and store them in system memory as volume data;
Step 2: using the 3D graphics API extension functions of OpenGL or DirectX, load the volume data kept in system memory in step 1 into GPU video memory, where it becomes a 3D texture that the GPU can access;
Step 3: compute and generate the proxy geometry, in the following sub-steps:
Step 3-1: compute the coordinates of the vertices of the volume bounding box in the eye coordinate system;
first convert the local coordinates of the eight vertices of the volume bounding box to world coordinates, then, via the world-to-eye (view) matrix of the viewpoint, transform the world coordinates of the eight vertices into the eye coordinate system;
Step 3-2: compute the number of polygonal slices contained in the proxy geometry;
in the eye coordinate system, first compute the maximum z_max and minimum z_min of the eight bounding-box vertices along the view direction, then determine the sampling interval Δh, i.e. the distance between adjacent polygonal slices, and finally compute the number of polygonal slices N_slices in the proxy geometry as N_slices = ⌈(z_max − z_min) / Δh⌉;
Step 3-3: compute the world coordinates of the vertices of each polygonal slice of the proxy geometry;
for each polygonal slice in the proxy geometry, compute the intersection points of the plane containing that slice with the volume bounding box in the world coordinate system, then arrange the world coordinates and 3D texture coordinates of these intersection points in order to form the vertex array of the slice;
Step 4: for each polygonal slice obtained in step 3, perform per-pixel lighting and color calculations, in the following sub-steps:
Step 4-1: compute the volume-data gradient at each pixel of every polygonal slice in the proxy geometry;
the gradient of the three-dimensional data f(x, y, z) is expressed by a partial-derivative gradient operator; concretely, a gradient computation based on 6-neighborhood sampling differences is adopted to compute the gradient at (x, y, z);
Step 4-2: use the color transfer function to compute the color of each pixel of every polygonal slice in the proxy geometry;
wherein said color transfer function is a one-dimensional function of the form c = f(v), v being the gray value of the volume data and c the output color value; its role is to map grayscale volume data into volume data in the RGB color space;
Step 4-3: compute the rasterized pixel shading value of each pixel of every polygonal slice in the proxy geometry according to the Phong illumination model;
in said Phong illumination model, reflected light is divided into three parts: ambient light, diffuse reflection, and specular reflection;
said ambient light is taken as the ambient light of one environment, uniformly distributed, with the same intensity in every direction;
said diffuse reflection intensity is: I_d = I_p · K_d · cos(θ), θ ∈ (0, π/2); where I_p is the incident light intensity, K_d is the diffuse reflection coefficient of the object, satisfying 0 < K_d < 1, and θ is the angle between the surface normal N at a point P of a surface element and the unit vector L from P toward the light source;
said specular reflection intensity is: I_s = I_p · K_s · cos^n(θ'), θ' ∈ (0, π/2); where I_p is the incident light intensity, K_s is the specular reflection coefficient related to the optical properties of the object surface, θ' is the angle between the view direction V and the reflection direction R, and n is the specular exponent;
Step 5: composite all polygonal slices of the proxy geometry into a 3D medical image by alpha blending;
with depth testing disabled and alpha blending enabled, all polygonal slices of the proxy geometry are composited into the 3D medical image; to improve rendering speed, the proxy geometry can be stored in a display list, so that many vertices can be rendered with a single instruction, increasing drawing efficiency.
2. The GPU-accelerated three-dimensional medical image display method according to claim 1, characterized in that, when the medical DICOM image sequence files are read and stored in system memory as volume data in step 1, the volume data are first preprocessed for a better visual result, by denoising and/or by interpolating additional data between slices.
3. The GPU-accelerated three-dimensional medical image display method according to claim 1, characterized in that the concrete interface function in step 2 is the OpenGL extension glTexImage3DEXT, which loads the volume data as a 3D texture.
4. The GPU-accelerated three-dimensional medical image display method according to claim 1, characterized in that the concrete method of computing the intersection points in step 3-3 is:
let the extent of the volume data in its local coordinate system be [x_min, x_max], [y_min, y_max], [z_min, z_max], let the viewpoint position be e, the view direction vector be v, and the sampling interval be Δh; the plane containing a polygonal slice can be expressed as the equation v · (p − p₀) = 0; from the view direction and the position of the point p₀ of the volume through which the plane passes, the intersection points of the volume's bounding faces with the plane are computed in sequence.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA200910059864XA CN101593345A (en) | 2009-07-01 | 2009-07-01 | Three-dimensional medical image display method based on the GPU acceleration |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA200910059864XA CN101593345A (en) | 2009-07-01 | 2009-07-01 | Three-dimensional medical image display method based on the GPU acceleration |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101593345A true CN101593345A (en) | 2009-12-02 |
Family
ID=41407987
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNA200910059864XA Pending CN101593345A (en) | 2009-07-01 | 2009-07-01 | Three-dimensional medical image display method based on the GPU acceleration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101593345A (en) |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102074036A (en) * | 2010-12-07 | 2011-05-25 | 中国地质大学(武汉) | Graphics processing unit (GPU) based accelerated dynamic sectioning method of volume data |
CN102074036B (en) * | 2010-12-07 | 2013-01-09 | 中国地质大学(武汉) | Graphics processing unit (GPU) based accelerated dynamic sectioning method of volume data |
CN102096935A (en) * | 2011-03-17 | 2011-06-15 | 长沙景嘉微电子有限公司 | Blocking-rendering based generation of anti-aliasing line segment in GPU |
CN102096935B (en) * | 2011-03-17 | 2012-10-03 | 长沙景嘉微电子股份有限公司 | Blocking-rendering based generation of anti-aliasing line segment in GPU |
CN102509341A (en) * | 2011-10-17 | 2012-06-20 | 中国科学院自动化研究所 | Method for intersecting light and voxel |
CN102509341B (en) * | 2011-10-17 | 2014-06-25 | 中国科学院自动化研究所 | Method for intersecting light and voxel |
CN102542609A (en) * | 2011-12-16 | 2012-07-04 | 大连兆阳软件科技有限公司 | Ground surface modifier drawing optimizing method |
CN102521879A (en) * | 2012-01-06 | 2012-06-27 | 肖华 | 2D (two-dimensional) to 3D (three-dimensional) method |
CN102637303A (en) * | 2012-04-26 | 2012-08-15 | 珠海医凯电子科技有限公司 | Ultrasonic three-dimensional mixed and superposed volumetric rendering processing method based on GPU (Graphic Processing Unit) |
CN102637303B (en) * | 2012-04-26 | 2014-05-28 | 珠海医凯电子科技有限公司 | Ultrasonic three-dimensional mixed and superposed volumetric rendering processing method based on GPU (Graphic Processing Unit) |
CN103021017A (en) * | 2012-12-04 | 2013-04-03 | 上海交通大学 | Three-dimensional scene rebuilding method based on GPU acceleration |
CN103021017B (en) * | 2012-12-04 | 2015-05-20 | 上海交通大学 | Three-dimensional scene rebuilding method based on GPU acceleration |
CN104346823A (en) * | 2013-07-30 | 2015-02-11 | 南京普爱射线影像设备有限公司 | CUDA-based dental three-dimensional CT image processing method |
CN104134230A (en) * | 2014-01-22 | 2014-11-05 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device and computer equipment |
CN104134230B (en) * | 2014-01-22 | 2015-10-28 | 腾讯科技(深圳)有限公司 | A kind of image processing method, device and computer equipment |
WO2015110012A1 (en) * | 2014-01-22 | 2015-07-30 | Tencent Technology (Shenzhen) Company Limited | Image processing method and apparatus, and computer device |
US10692272B2 (en) | 2014-07-11 | 2020-06-23 | Shanghai United Imaging Healthcare Co., Ltd. | System and method for removing voxel image data from being rendered according to a cutting region |
CN104166958A (en) * | 2014-07-11 | 2014-11-26 | 上海联影医疗科技有限公司 | Area-of-interest displaying and operating method |
US11403809B2 (en) | 2014-07-11 | 2022-08-02 | Shanghai United Imaging Healthcare Co., Ltd. | System and method for image rendering |
CN104199094A (en) * | 2014-09-03 | 2014-12-10 | 电子科技大学 | Light-shade processing method for seismic data |
CN104463941B (en) * | 2014-12-05 | 2019-05-31 | 上海联影医疗科技有限公司 | Object plotting method and device |
CN104463941A (en) * | 2014-12-05 | 2015-03-25 | 上海联影医疗科技有限公司 | Volume rendering method and device |
CN108597012A (en) * | 2018-04-16 | 2018-09-28 | 北京工业大学 | A kind of three-dimensional rebuilding method of the medical image based on CUDA |
CN109493414A (en) * | 2018-10-30 | 2019-03-19 | 西北工业大学 | A kind of Blinn-Phong illumination enhancing algorithm adaptive based on gradient |
CN109634611B (en) * | 2019-01-03 | 2021-08-10 | 华南理工大学 | Mobile terminal three-dimensional model ply file analysis and display method based on OpenGL |
CN109634611A (en) * | 2019-01-03 | 2019-04-16 | 华南理工大学 | Mobile terminal threedimensional model ply document analysis and methods of exhibiting based on OpenGL |
CN110458950A (en) * | 2019-08-14 | 2019-11-15 | 首都医科大学附属北京天坛医院 | A kind of method for reconstructing three-dimensional model, mobile terminal, storage medium and electronic equipment |
CN110458949A (en) * | 2019-08-14 | 2019-11-15 | 首都医科大学附属北京天坛医院 | Method for reconstructing, mobile terminal and the electronic equipment of the two-dimentional tangent plane of threedimensional model |
CN111721216A (en) * | 2020-06-29 | 2020-09-29 | 河南科技大学 | Steel wire rope detection device based on three-dimensional image, surface damage detection method and rope diameter calculation method |
US11263749B1 (en) | 2021-06-04 | 2022-03-01 | In-Med Prognostics Inc. | Predictive prognosis based on multimodal analysis |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101593345A (en) | Three-dimensional medical image display method based on the GPU acceleration | |
CN102915559B (en) | Real-time transparent object GPU (graphic processing unit) parallel generating method based on three-dimensional point cloud | |
JP4637837B2 (en) | Multiple attribute real-time simultaneous rendering method, system and program | |
CN101882323B (en) | Microstructure surface global illumination real-time rendering method based on height map | |
US20110069070A1 (en) | Efficient visualization of object properties using volume rendering | |
US6573893B1 (en) | Voxel transfer circuit for accelerated volume rendering of a graphics image | |
CN104167011B (en) | Micro-structure surface global lighting drawing method based on direction light radiation intensity | |
Sabino et al. | A hybrid GPU rasterized and ray traced rendering pipeline for real time rendering of per pixel effects | |
CN107784622A (en) | Graphic system and graphics processor | |
CN101441774A (en) | Dynamic scene real time double face refraction drafting method based on image mapping space | |
CN103617594B (en) | Noise isopleth-surface drawing-oriented multi-GPU (Graphics Processing Unit) rendering parallel-processing device and method thereof | |
Nah et al. | MobiRT: an implementation of OpenGL ES-based CPU-GPU hybrid ray tracer for mobile devices | |
Kaufman et al. | A survey of architectures for volume rendering | |
CN103745495A (en) | Medical volume data based volume rendering method | |
CN107633548B (en) | The method and device of figure rendering is realized in a computer | |
US20180005432A1 (en) | Shading Using Multiple Texture Maps | |
JP4847910B2 (en) | Curvature-based rendering method and apparatus for translucent material such as human skin | |
CN105761300B (en) | The processing method of process Shader anti-aliasings based on pre-sampling | |
CN104599311A (en) | GPU (Graphics Processing Unit)-based hybrid visual system of three-dimensional medical image | |
Wang et al. | Implementation of shading techniques based on OpenGL | |
Novello et al. | Riemannian Ray Tracing | |
Torres | Physically Based Real-Time Raytracing | |
Orhun | Interactive volume rendering for medical images | |
Rendering | Physically-Based Rendering | |
Yalım | Acceleration of direct volume rendering with texture slabs on programmable graphics hardware |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Open date: 20091202 |