CN1403057A - 3D Euclidean distance transformation process for soft tissue display in CT image - Google Patents


Info

Publication number: CN1403057A
Application number: CN01142133A (filed by Individual)
Authority: CN (China)
Inventor: 田捷 (Tian Jie)
Current and original assignee: Individual
Original language: Chinese (zh)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis as to its accuracy)
Abstract

The 3D Euclidean distance transformation process for soft tissue display in CT images includes: an extraction step, which extracts the contour of an object from a medical image; a fast 3D Euclidean distance transformation step, which creates a distance image; a processing step, which processes the 3D data to create new 3D data meeting given depth and gray-level requirements; and a display step, which classifies the created 3D data and performs volume rendering. The present invention can recognize the outer skin contour accurately, position soft tissue precisely, and clearly reproduce the spatial relations of blood vessels, muscles and bones, so it has important applications in medicine. Its operation speed is fast enough for real-time interaction.

Description

Method for displaying soft tissue in CT images using three-dimensional Euclidean distance transformation
Technical field
The present invention relates to pattern recognition and information processing technology, and in particular to a method for displaying soft tissue in medical images using three-dimensional Euclidean distance transformation.
Prior art
CT is the abbreviation of Computerized Tomography. Tomography means acquiring data about cross sections of an object by some measurement means at various positions outside the object, and then synthesizing tomographic images of the object from these data. CT in the ordinary sense refers to X-ray CT; because of the good penetration ability and contrast mechanism of X-rays, it is widely used in medical imaging, industrial non-destructive testing, airport security inspection, and other fields. In addition, single photon emission computed tomography (SPECT), positron emission tomography (PET), and so on also belong to the category of CT.
In the 1970s the world's first X-ray CT experimental prototype was born in Britain, designed and built by the engineer Hounsfield. Over the following thirty years CT technology matured, developing through five generations of products; the birth of spiral CT in particular not only reduced scan time but also markedly improved image quality. Its application in medicine opened a new era of non-invasive diagnosis and achieved great success. However, existing imaging technology normally acquires image data of a particular section of the human body, and the doctor then diagnoses by observing film or a screen. Whether on film or on screen, what the medical worker observes is still a two-dimensional image, and because of the limitations of the imaging equipment (artifacts and noise caused by signal attenuation, partial volume effects and other factors), image quality suffers, so the accuracy of the diagnosis depends to a great extent on the doctor's clinical experience and professional knowledge. According to a report by an authoritative domestic media outlet, the misdiagnosis rate of some domestic hospitals is currently as high as 70%, a worrying situation; one solution is medical image processing and analysis systems. Medical image processing and analysis is a new discipline that has arisen in recent years and is now flourishing.
With the rapid development of computers and related technologies and the increasing maturity of graphics and image techniques, the quality and display of medical images can be greatly improved by means of image processing and analysis, thereby greatly raising the diagnostic level. This not only makes better use of existing medical imaging equipment to improve clinical application, but also provides an electronic means for medical training, medical research and teaching, computer-aided clinical surgery, and so on, laying a solid foundation for medical research and development; its value is incalculable.
Three-dimensional visualization of medical images is the core of this discipline: it uses a series of two-dimensional slice images to reconstruct and display a three-dimensional model, and it is the prerequisite for quantitative analysis. Two rendering techniques are used in three-dimensional visualization: surface rendering and direct volume rendering. The chief characteristic of surface rendering is that the two-dimensional data field must first be segmented and reconstructed in three dimensions to form an iso-surface representation of the object edges, after which the image is drawn with an illumination model. Volume rendering, by contrast, regards each voxel in the three-dimensional data as a semi-transparent cell, classifies it and assigns it a color and opacity, casts light through the whole data field, and composites the colors to obtain the final rendered result.
For displaying bone, CT achieves results better than any other equipment; yet because of certain technical and theoretical limitations, current CT is far inferior to MRI in its ability to display soft tissue. As is well known, MRI equipment is expensive and has many contraindications. If the display of soft tissue could be enhanced, CT might substitute for MRI: the patient would not need both a CT examination and an MRI examination, and the doctor could see both bone information and soft tissue information in the data of a single examination. This would be work of great clinical value.
Skin lies on the surface of the human body and consists of three parts: epidermis, dermis and subcutaneous tissue. The epidermis grows from the basal layer and is composed of squamous cells of varying shape and size. The dermis, beneath the epidermis, contains abundant blood vessels and nerve endings. The subcutaneous tissue lies at the bottom of the dermis; it is composed of bundles of connective fibers and a large number of fat cells, so it is also called the subcutaneous fat layer, and the fiber bundles enclose blood vessels, lymph, nerves, and so on. Because the density differences among skin, fatty tissue and the soft tissue of muscle are small, the gray-level differences among the various soft tissues in a medical CT image are also small, so traditional segmentation methods such as thresholding cannot easily distinguish tissues such as muscle, nerves and blood vessels, and it is difficult to reconstruct iso-surfaces at these tissue edges. For this reason, soft tissue in CT images is usually displayed by volume rendering. In the original volume data, however, the internal muscles, nerves and blood vessels are blocked by the epidermis and subcutaneous fat and still cannot be displayed clearly. To display soft tissue, the voxels corresponding to the surface skin and subcutaneous fat must be removed to a certain depth from the original data, and volume rendering performed on the new data field so that subcutaneous blood vessels, muscles and other soft tissues become visible.
Summary of the invention
The purpose of the present invention is to provide a practical soft tissue display method that achieves accurate identification of the skin contour and accurate localization of soft tissue depth, and clearly reproduces the spatial anatomical relations of subcutaneous blood vessels, muscles and bone.
To achieve the above purpose, the method for displaying soft tissue in CT images using three-dimensional Euclidean distance transformation comprises:
(1) an extraction step, which extracts the object contour from the medical image;
(2) a distance transformation step, which uses a fast three-dimensional Euclidean distance transformation to generate a distance image;
(3) a processing step ("peeling"), which processes the three-dimensional data to generate new three-dimensional data satisfying given depth and gray-level conditions;
(4) a display step, which classifies the newly generated three-dimensional data and performs volume rendering.
The method of the present invention for displaying soft tissue in CT images using three-dimensional Euclidean distance transformation achieves accurate identification of the skin contour and accurate localization of soft tissue depth, and clearly reproduces the spatial anatomical relations of subcutaneous blood vessels, muscles and bone, so it has important application value in the medical field. Moreover, because it adopts a self-designed three-dimensional Euclidean distance transformation algorithm and a volume-rendering-based multi-surface display method for three-dimensional data fields, its operation is fast enough to satisfy the doctor's requirement for real-time interaction. The method therefore has high credibility, applicability and acceptability.
Description of drawings
Fig. 1 shows the structure of the soft tissue display method realized with distance transformation;
Fig. 2 is a sketch of the propositions used by the three-dimensional Euclidean distance transformation algorithm and their proofs;
In Fig. 2, A, B and C are three points in the same row and column on different slices, D is the point in slice N nearest to B, and slices M and P are the top and bottom slices respectively. By the propositions, D is also the nearest point in slice N to points A and C; that is, the black point in the two-dimensional image of slice N nearest to a pixel (i, j, k) of slice M is the point in slice N nearest to (i, j, k1). Therefore only one distance calculation per slice of the n two-dimensional images, followed by comparison, is needed to find the nearest black point. It also follows from the propositions that if the point nearest to B in the three-dimensional distance image is D, and there is another point E in the same row and column as A, B and C lying between slices M and N, then the point nearest to E in the three-dimensional distance image must lie between slices M and N.
Fig. 3 is a sketch of the fast three-dimensional Euclidean distance transformation method;
Fig. 4 is a sketch of the volume-rendering-based multi-surface display method for three-dimensional data fields;
Fig. 5 shows the gray and gradient weighting functions;
wherein f(v) denotes the gray value of a voxel and w1(v) is the gray weighting function;
g(v) denotes the gradient at the voxel center and w2(v) is the gradient weighting function.
Fig. 6 shows the soft tissue display results;
The experimental data are a human head CT image series of 58 slices with 1.5 mm spacing and resolution 512 × 512 × 16. In Fig. 6: image 1) is an original CT image of the head; image 2) is the distance image obtained from image 1) by distance transformation; image 3) is the head CT image after processing to a depth of 9.76 mm over the gray range 0-4000; image 4) is the three-dimensional reconstruction of the original image; image 5) is the head image reconstructed in three dimensions from the new image 3) after the surface skin has been stripped away, in which the blood vessels of the neck and the muscle tissue of the cheek are clearly visible; image 6) is the reconstruction of image 1) after processing the data with an erosion-dilation method. The comparison shows clearly that the image processed by distance transformation gives a clearer reconstruction of blood vessels and soft tissue.
Detailed description of the invention
The soft tissue display method of the present invention is described in detail below with reference to the drawings. In one specific implementation it consists of an image processing and display procedure of four steps, whose structure is shown in Fig. 1. The four steps, introduced one by one below, are: extracting the target contour, fast three-dimensional Euclidean distance transformation, peeling, and multi-surface display based on volume rendering.
Step 1: extracting the target contour (segmentation)
The purpose of this step is to preprocess for the distance transformation algorithm by separating the target object from the background, a process also called binarization. Because we will peel the object in the three-dimensional medical CT image from the outside inward, computing the minimum distance from each point inside the object to the object's outer contour, the target edge must be detected first. For soft tissue display this means detecting the outer contour of the skin.
To shorten running time, and given that the gray values of the background and the target object differ considerably in a CT image, we can adopt the traditional threshold method and region growing method; of course other segmentation methods such as active contours or edge detection may also be used.
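The region-growing alternative mentioned above can be sketched as a breadth-first flood fill from the seed point. The 4-connectivity, the tolerance test against the seed's gray value, and the function name are illustrative assumptions, not the patent's specification:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed`, collecting 4-connected pixels whose gray
    value is within `tol` of the seed's value (illustrative criterion)."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask
```

Growing from a pixel inside the skin contour yields a binary mask of the target object, which is exactly the binarization this step requires.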
For the region growing method, the user needs to select a point on the skin contour as the seed point. The key to the threshold method is the choice of threshold: the user may select a gray threshold distinguishing background from non-background, or an automatic threshold method may determine it. Common automatic threshold methods include the P-parameter method, the mode method, the differential histogram method, discriminant analysis and the variable threshold method. Given that medical images are noisy, we can adopt discriminant analysis: in the histogram of image gray values, find the threshold t that divides the set of gray values into two groups so that the two groups are optimally separated. The criterion of optimal separation is that the ratio of the variance of the group means to the within-group variance is maximal. This method works like the mode method when the histogram has two peaks, but it can obtain a threshold even when there is no peak. Suppose the image has L gray levels and the threshold is k, so that k divides the pixels of the image into groups 1 and 2. Let the pixel count of group 1 be ω1(k), its mean gray value M1(k) and its variance σ1²(k); let the pixel count of group 2 be ω2(k), its mean gray value M2(k) and its variance σ2²(k); and let the mean gray value of all pixels be Mτ. Then:
within-group variance: σw² = ω1σ1² + ω2σ2²
between-group variance: σB² = ω1(M1 - Mτ)² + ω2(M2 - Mτ)² = ω1ω2(M1 - M2)²
and the optimal threshold is the k for which the criterion η = σB²/σw² is maximal.
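The discriminant-analysis criterion above is Otsu's method. A minimal sketch under the stated definitions; maximizing the between-group variance ω1ω2(M1 - M2)² is equivalent to maximizing σB²/σw², and the histogram loop and names below are ours, not the patent's:

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Return the threshold k maximizing the between-group variance
    w1*w2*(M1 - M2)^2 over the gray-value histogram of `img`."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()          # gray-level probabilities
    g = np.arange(levels)
    best_k, best_score = 0, -1.0
    for k in range(1, levels):
        w1, w2 = p[:k].sum(), p[k:].sum()   # group weights
        if w1 == 0 or w2 == 0:
            continue
        m1 = (p[:k] * g[:k]).sum() / w1     # group mean gray values
        m2 = (p[k:] * g[k:]).sum() / w2
        score = w1 * w2 * (m1 - m2) ** 2    # between-group variance
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```

On a bimodal CT-like histogram (air background versus tissue) the returned k separates the two modes, giving the binarization threshold this step needs.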
Step 2: fast three-dimensional Euclidean distance transformation
Because the soft tissue in the medical CT image must be processed from the outside inward, i.e. the minimum distance from each soft tissue point inside the skin to the skin's outer contour must be computed, we take the background points outside the skin contour as feature points (binary value 0) and the remainder as non-feature points (binary value 1). The distance transformation then becomes the task of solving, for each pixel of the image, the shortest distance to a 0-pixel: the farther a pixel is from the contour (the nearer to the center), the higher its transformed feature value. Through the distance transformation we obtain a distance image in which the feature value of each point inside the contour is that point's minimum distance to the target contour, and the values corresponding to background points outside the outer contour are 0.
Distance is a crucial notion in image processing, defined differently according to the characteristics of the application. The distance transformation converts a binary image into a distance image: it is a mapping in which the value of each pixel is the distance from that point to its nearest feature pixel in the object or background. In a distance map, the value of a pixel is the radius of the largest circle that does not intersect any feature pixel. Distance transformations are widely used in image processing and pattern recognition, for example in target thinning, skeleton extraction, and shape interpolation and matching.
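For illustration, a tiny exact Euclidean distance transform on a 5 × 5 binary image; the use of SciPy here is our assumption for demonstration, as the patent implements its own transform:

```python
import numpy as np
from scipy import ndimage

# 0 = feature (contour/background) pixels, 1 = object pixels.
img = np.array([[0, 0, 0, 0, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 0, 0, 0, 0]])

# Each object pixel gets its Euclidean distance to the nearest 0-pixel;
# feature pixels get 0, matching the distance image described above.
dist = ndimage.distance_transform_edt(img)
print(dist[2, 2])   # 2.0: the centre is two pixels from the nearest 0
```

Pixels deeper inside the contour receive larger values, which is exactly the "farther from the contour, higher feature value" behavior described above.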
In practical computation two kinds of distance measure are commonly adopted: non-Euclidean and Euclidean. The former commonly uses chessboard, city block and chamfer distances; the algorithms use serial scans that propagate shortest-distance information during scanning, are simple to compute, but yield only an approximation of the Euclidean distance. The latter adopts row-column cross processing to narrow the range searched for the nearest black point, with algorithmic complexity O(n²). Considering that medical images place high demands on accurate distance measurement, we use the Euclidean distance transformation. The conventional three-dimensional Euclidean distance transformation has complexity O(n⁶); it is slow and of little practical use. To improve speed and reduce computation, we have designed and implemented a new three-dimensional Euclidean distance transformation algorithm on the basis of the two-dimensional Euclidean distance transformation algorithm proposed by Chen Qi (Chen Qi. A fully optimal algorithm for Euclidean distance transformation. Chinese Journal of Computers, 1995, 18(8): 611-616).
The basic idea of the three-dimensional Euclidean distance transformation algorithm is to decompose a three-dimensional binary image of n × n × n into n two-dimensional binary images of n × n; we call feature pixels "black points" and background pixels "white points". This idea comes from the fact that a three-dimensional medical image is normally composed of a group of two-dimensional CT images; strictly speaking it is not a true three-dimensional image but a regular data field. We first apply the two-dimensional Euclidean distance transformation algorithm to the n binary images to obtain the nearest black point of each pixel within its own two-dimensional image. Then the distances from each pixel in each two-dimensional image to the black points in the other image layers are compared, with an optimization that reduces the number of image layers, and the number of black points within them, that must participate in the comparisons, so as to find each pixel's nearest black point in three dimensions. The feature value of each pixel of the three-dimensional image is assigned the distance from that pixel to its nearest black point, which yields the three-dimensional Euclidean distance map. The time complexity of this algorithm is O(n³ log n).
The essential characteristic of this fast three-dimensional Euclidean distance transformation algorithm is its use of optimization to reduce the number of two-dimensional image layers, and the number of black points within them, that must participate in distance comparisons. The optimized design rests on the following three propositions (see Fig. 2).
Proposition 1: Let (i, j, k1) and (i, j, k2) be two pixels occupying the same row and column position on two different two-dimensional images, and suppose the nearest black point to (i, j, k2) within its own image is (a, b, k2). Then the nearest black point to (i, j, k1) within the image containing (i, j, k2) is also (a, b, k2).
Proof: For any pixel (z, w, k2) in the image containing (i, j, k2), the hypothesis gives (z-i)² + (w-j)² ≥ (a-i)² + (b-j)². Hence (z-i)² + (w-j)² + (k2-k1)² ≥ (a-i)² + (b-j)² + (k2-k1)², i.e. the nearest black point to (i, j, k1) within the image containing (i, j, k2) is (a, b, k2). QED.
By Proposition 1, to find the nearest black point of a pixel (i, j, k) within another two-dimensional image, it is unnecessary to compute and compare distances to all black points in that image: a single lookup suffices. To find a pixel's nearest black point in three dimensions, it would originally be necessary to compute and compare distances to all black points in all n two-dimensional images; using Proposition 1, only one distance calculation per image, followed by comparison, is needed. The search range for the nearest black point in three dimensions is thus greatly narrowed.
To reduce the number of two-dimensional images participating in distance calculation and comparison, we give Proposition 2.
Proposition 2: Let (i, j, k1) and (i, j, k2) be two pixels occupying the same row and column position on two different images, with k1 < k2. If V(i, j, k1) = (a, b, c) and V(i, j, k2) = (o, p, q), where V(x, y, z) denotes the function giving the nearest black point of pixel (x, y, z) in the three-dimensional image, then c ≤ q.
Proof: From V(i, j, k1) = (a, b, c) and V(i, j, k2) = (o, p, q) we have
(o-i)² + (p-j)² + (q-k1)² ≥ (a-i)² + (b-j)² + (c-k1)²
(a-i)² + (b-j)² + (c-k2)² ≥ (o-i)² + (p-j)² + (q-k2)².
Adding the two inequalities gives (q-k1)² + (c-k2)² ≥ (c-k1)² + (q-k2)², and since k1 < k2 it follows that c ≤ q. QED.
Suppose V(i, j, k) = (o, p, q) is known. By Proposition 2, for a pixel (i, j, l) at the same row and column position on another image: if l < k, then V(i, j, l) can only lie on images 1 through q; if l > k, then V(i, j, l) can only lie on images q through n. In the same way, if the n pixels sharing a row and column position are divided into d equal parts, then the nearest black points V of these pixels are also divided into d parts, and the nearest black point of a pixel in a given part need only be sought in the corresponding part.
Thus we can adopt successive bisection to narrow the search range for nearest black points. We first find, for the pixels of the n-th two-dimensional image, their nearest black points in three dimensions; in this step each pixel requires n distance calculations and comparisons. We then find the nearest black points of the pixels in the (n/2)-th image, which in effect bisects the n images; using Proposition 2, the number of images that must participate in distance calculation and comparison to find these nearest black points is sharply reduced. Next we find the nearest black points of the pixels in the centers of the two halves, i.e. the (n/4)-th and (3n/4)-th images, after which the n images are divided into quarters; then those of the centers of the four quarters, i.e. the (n/8)-th, (3n/8)-th, (5n/8)-th and (7n/8)-th images; and so on, until finally the nearest black points of the pixels in the last set of bisection-center images are found. The n images are thus divided into 1, 2, 4, 8, ..., n/2 equal parts; after log2(n) rounds of bisection in all, every image has been processed once, and the nearest black points of all pixels have been found.
In this process, the finer the subdivision, the smaller the search range for the nearest black points of the pixels on the center images being processed, i.e. the fewer the images participating in distance calculation and comparison.
Proposition 3: Suppose the n two-dimensional images are divided into d equal parts (d = 1, 2, 4, 8, ..., n/2). To find the nearest black points of one group of pixels sharing a row and column position in the d bisection-center images, the number of distance calculations required is less than 1.5n.
Proof: The n pixels sharing a row and column position are divided into d equal parts, and by Proposition 2 their nearest black points V are likewise divided into d parts; let the k-th part contain l_k elements, so that Σ(k=1..d) l_k = n. Because an element of V on the boundary between two adjacent parts belongs to both parts, finding the nearest black point of the center pixel of the k-th part requires distance calculations and comparisons with l_k + 1 elements of the corresponding part of V. Finding the nearest black points of all d center pixels therefore requires Σ(k=1..d) (l_k + 1) - 1 = n + d - 1 < 1.5n distance calculations. QED.
Applying Proposition 3, finding the nearest black points of all pixels in the d bisection-center images requires fewer than 1.5n³ distance calculations.
Next we give the statement of the full three-dimensional Euclidean distance transformation algorithm (see Fig. 3):
Step 1: apply the two-dimensional distance transformation to the n two-dimensional binary images, finding for each pixel (i, j, k) its nearest black point within its own image, denoted M(i, j, k);
Step 2: for each pixel (i, j, n) of the n-th image, compute and compare distances to the n-1 points M(i, j, k) (k = 1, 2, ..., n-1) in turn; the nearest of them is the pixel's nearest black point in three dimensions;
Step 3: set the subdivision variable d to the initial value 2;
Step 4: divide the n images into d equal parts; the image indices q of the part centers are in turn (1/d)n, (3/d)n, ..., ((2d-1)/d)n, with q < n;
Step 5: for each q = (1/d)n, (3/d)n, ..., ((2d-1)/d)n in turn, compute the nearest black point of each pixel (i, j, q) of the q-th image by computing and comparing distances, in turn, to the points M(i, j, k) within the possible range of its nearest black point; the nearest of them is the pixel's nearest black point in three dimensions;
Step 6: if all n two-dimensional images have now been scanned, the algorithm ends; otherwise bisect again on the basis of the previous subdivision, i.e. set d = 2d, and return to Step 4.
Because the nearest black points are obtained by distance comparison, once the nearest black point of each pixel in three dimensions has been found, the distance between them has been obtained at the same time.
According to Chen Qi's paper, the time complexity of his Euclidean distance transformation algorithm for an n × n two-dimensional image is O(n²), so the time complexity of Step 1 is O(n³). The main computation of the algorithm lies in Step 5; by Proposition 3, each round of Step 5 requires fewer than 1.5n³ distance calculations, and since Steps 4 through 6 execute log2(n) rounds, the time complexity of the whole algorithm is O(n³ log n).
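The slice-decomposition idea of the algorithm can be sketched as follows, with SciPy supplying each slice's 2D EDT and nearest-feature-point map (the lookup justified by Proposition 1). The bisection search of Propositions 2 and 3 is omitted here, so this brute-force sketch runs in O(n⁴) rather than the paper's O(n³ log n); it serves only to make the decomposition concrete and to check it against a reference 3D EDT:

```python
import numpy as np
from scipy import ndimage

def edt3d_layered(binary):
    """3D Euclidean distance transform by slice decomposition.

    `binary`: 1 = object voxel, 0 = feature ("black") point. For each slice m,
    a 2D EDT yields the nearest in-slice feature point of every (row, col)
    position; by Proposition 1 that same point is the nearest point of slice m
    to any voxel (row, col, k). Each voxel then takes the minimum over slices.
    """
    nz, nr, nc = binary.shape
    rr, cc = np.meshgrid(np.arange(nr), np.arange(nc), indexing="ij")
    candidates = []                       # per-slice nearest-feature maps
    for m in range(nz):
        if (binary[m] == 0).any():        # slice must contain feature points
            _, (ri, ci) = ndimage.distance_transform_edt(
                binary[m], return_indices=True)
            candidates.append((m, ri, ci))
    dist2 = np.full(binary.shape, np.inf)
    for k in range(nz):                   # slice of the voxel
        for m, ri, ci in candidates:      # slice holding the candidate point
            d = (ri - rr) ** 2 + (ci - cc) ** 2 + (m - k) ** 2
            dist2[k] = np.minimum(dist2[k], d)
    return np.sqrt(dist2)
```

On a volume whose background (0) surrounds the object (1), the result agrees exactly with an off-the-shelf 3D Euclidean distance transform.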
Step 3: peeling
We now process the distance image obtained in Step 2 from the original medical CT image. First, the user interactively specifies the distance depth through the interface; different depths give different final display effects. The user also specifies the ranges of gray value and gradient through the interface. The so-called "peeling" operation removes from the original image every point whose distance feature value lies within the specified depth range and whose gray value and gradient lie within the specified ranges (i.e. sets that point's gray value to 0), yielding a new three-dimensional medical image (a group of two-dimensional slices).
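A minimal sketch of the peeling operation under the stated conditions; distances are in voxel units, the gradient criterion is omitted for brevity, and the names are ours:

```python
import numpy as np

def peel(volume, dist, depth, gray_lo, gray_hi):
    """Zero out voxels whose distance-map value lies within `depth` of the
    skin contour AND whose gray value lies in [gray_lo, gray_hi] (the
    skin/fat range). dist == 0 marks background, which is left untouched."""
    out = volume.copy()
    mask = ((dist > 0) & (dist <= depth)
            & (volume >= gray_lo) & (volume <= gray_hi))
    out[mask] = 0
    return out
```

Volume rendering the returned array then shows the subcutaneous vessels and muscle that the removed skin and fat voxels previously occluded.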
Step 4: multi-surface display of the three-dimensional data field based on volume rendering
To display subcutaneous soft tissues such as muscles, nerves and blood vessels clearly, we apply a volume-rendering-based multi-surface display method for three-dimensional data fields to the new image field obtained in the previous step, making it convenient for the user to view the data stereoscopically from multiple angles and to carry out quantitative analysis and further processing.
The traditional direct volume rendering method regards each voxel of the data field as a semi-transparent cell, assigns it a color and opacity, casts light through the whole data field and composites the colors. There are at present three classes of direct volume rendering method: ray casting, projection imaging and frequency-domain transformation. Ray casting emits a ray from each pixel of the screen through the data field, sampling and accumulating color as each ray passes through the data, to obtain the color of the corresponding pixel and so form the final view; its image quality is good but it is slow. Projection imaging projects the voxels of the data field one by one onto the screen along a chosen projection direction, accumulating the influence of the projected voxels on each screen pixel to obtain its color and form the final view; it is fast, but illumination is hard to compute and the image quality is poorer. The frequency-domain method uses the Fourier transform to convert the three-dimensional data field into a three-dimensional frequency-domain space and obtains a two-dimensional spatial view of the data field from a two-dimensional slice of the frequency domain; but the views it generates cannot reflect the occlusion relations of color compositing in the spatial domain, making it difficult for the observer to judge the front-to-back relations of the material distribution (Wang Wencheng, Wu Enhua. Variable templates for volume rendering. Chinese Journal of Computers, July 1997, 20(7): 592-599).
Direct volume rendering can display multiple materials of the data field in one view and reveal their mutual relations, but the image is unavoidably somewhat blurred, and because of occlusion the parts far from the viewpoint are hard to observe and analyze.
Levoy (Levoy M. Display of surfaces from volume data. IEEE Computer Graphics and Applications, 1988, 8(3): 29-37) used a classification function to assign opacity to the voxels of the three-dimensional data field, giving high opacity to voxels on material boundary surfaces to highlight them; every voxel of nonzero opacity contributes to the final displayed image. Tang Zesheng et al. (Tang Zesheng, Yuan Jun. Displaying three-dimensional data fields with image-order volume rendering. Chinese Journal of Computers, November 1994, 17(11): 801-808) improved this method, reconstructing the three-dimensional data field instead of the original brightness field, which improved view quality and saved storage, and displayed multiple iso-surfaces with the improved method; but the positions of the iso-surfaces must be determined by continual sampling, so the computation is very large. Udupa et al. (Udupa J K, Odhner D. Shell rendering. IEEE Computer Graphics & Applications, 1993, 13(6): 58-67) adopted projection imaging to render the three-dimensional data field and display multiple iso-surfaces, but the problem of poor lighting effects in projection imaging remained unsolved, since the gradient at the voxel center is used in place of the normal vectors of the whole voxel.
In many applications, people are often concerned only with the boundaries between different materials, and use them to convey the overall situation of the whole data field. Addressing this characteristic, we propose a multi-surface display method for three-dimensional data fields based on volume rendering. To accelerate display, we do not assign opacities to the voxels of the data field through a classification function (Tang Guo, Zhao Xiaodong, Wang Yuanmei. Research on fuzzy surface construction, opacity mapping and fast volume rendering algorithms. Acta Electronica Sinica, January 1999, 27(1): 17-21); instead, we extract the boundaries between different materials in the data field by gray-gradient weighting and perform color compositing only for these boundary voxels, significantly reducing the amount of computation. In addition, different boundaries can be given different opacities according to display needs, flexibly reflecting the mutual relations of the material boundaries in three-dimensional space. As for image quality, traditional projection imaging replaces the normal vectors of a whole voxel with the gradient at its center, and therefore cannot reflect differences among the pixels the voxel influences. We treat boundary voxels as mixtures of different materials and use a direction-dependent trilinear interpolation to compute the intersection of the viewing direction with the isosurface inside the voxel, so that the multiple pixels a voxel influences correspond to different intersection points; lighting is then computed from the normal vector at each intersection, improving the quality of the displayed image. Finally, the image is displayed with the projection imaging method.
To summarize: first, we extract the boundaries between different materials in the three-dimensional data field by gray-gradient weighting and, according to display needs, assign opacities only to these boundary voxels and perform brightness compositing for them alone, significantly reducing computation and raising display speed. We treat boundary voxels as mixtures of different materials and use a direction-dependent trilinear interpolation to compute the intersection of the viewing direction with the isosurface inside the voxel, computing lighting from the normal vector at the intersection to improve the quality of the displayed image. Finally, the image is displayed with the projection imaging method.
The method consists of the following steps: material boundary extraction; opacity assignment for the different materials; computation of the normal vector at the intersection of the line of sight with the isosurface inside each boundary voxel; and brightness compositing. A sketch of the algorithm is shown in Fig. 4.
Step 1: material boundary extraction
In our method, we are concerned only with the boundary surfaces between different materials, and do not consider the contribution of voxels interior to a single material to the final displayed image; this significantly reduces the rendering time of the three-dimensional data field. Extraction of the material boundaries is determined mainly by the physical attributes of the data field. Boundary extraction as such belongs to the problem of image segmentation and is not discussed here. We apply gray-gradient weighting to the three-dimensional data field, set thresholds according to the number of material boundaries to be extracted, and use these thresholds to identify the different material boundaries. The rough shape of the weighting functions is shown in Fig. 5.
In Fig. 5, f(v) denotes the gray value of a voxel and g(v) the gradient at the voxel center; the weighting function is w(v) = w1(v) × w2(v). If the original three-dimensional data field is denoted <V, f>, where V is the set of all voxels in the data field and f is the voxel gray value, then the data field after gray-gradient weighting can be denoted <V, w>.
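A minimal sketch of the gray-gradient weighting and thresholding (assuming a NumPy volume; the shapes of w1 and w2 are specified by Fig. 5, which is not reproduced here, so they are passed in as caller-supplied functions):

```python
import numpy as np

def gradient_magnitude(volume):
    """Central-difference gradient magnitude g(v) at every voxel."""
    g0, g1, g2 = np.gradient(volume.astype(float))
    return np.sqrt(g0**2 + g1**2 + g2**2)

def boundary_weight(volume, w1, w2):
    """w(v) = w1(f(v)) * w2(g(v)): gray value weighted by gradient magnitude."""
    return w1(volume.astype(float)) * w2(gradient_magnitude(volume))

def extract_boundaries(volume, w1, w2, threshold):
    """Boundary voxels are those whose weighted response exceeds a threshold
    chosen according to the material boundary to be extracted."""
    return boundary_weight(volume, w1, w2) > threshold
```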
Step 2: opacity assignment for different materials
According to the number of extracted material boundaries, different opacities are assigned to the different types of material. The greater a material's opacity, the less light passes through its voxels, and the less visible the materials behind it become. Conversely, the smaller the opacity, the more transparent the material, so different materials in front and behind can be seen simultaneously. Depending on which type of material is to be highlighted, the opacity assignment can be adjusted.
We decompose the three-dimensional data field into a set of two-dimensional slices and store the material boundary voxels in slice order. For a boundary voxel in any slice, we define the following data structure to describe it:
voxel[i] = {x, y, z, tt, op}
where: i is the slice number;
(x, y, z) is the position of the boundary voxel in the three-dimensional data field;
tt is the type of material boundary, e.g. skin, muscle or bone;
op is the opacity assigned to boundary voxels of that material type.
It should be noted that a material boundary defined in this way has a certain thickness.
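The per-slice record voxel[i] = {x, y, z, tt, op} can be transcribed directly (a sketch; the opacity table below is illustrative only, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class BoundaryVoxel:
    x: int          # position of the boundary voxel in the 3D data field
    y: int
    z: int
    tt: str         # material boundary type, e.g. "skin", "muscle", "bone"
    op: float       # opacity assigned to this material boundary (0..1)

# Illustrative opacities only: larger values hide what lies behind the boundary.
OPACITY = {"skin": 0.1, "muscle": 0.4, "bone": 0.9}

def make_voxel(x, y, z, tt):
    """Build a boundary voxel record; boundary voxels are kept slice by slice."""
    return BoundaryVoxel(x, y, z, tt, OPACITY[tt])
```

Raising the op of one material type makes everything behind its boundary less visible, matching the adjustment described above.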
Step 3: computing the normal vector at the isosurface/sight-line intersection inside boundary voxels
After extracting the different material boundaries, we can use projection imaging to display the boundary voxels quickly. A major drawback of projection imaging is that it is difficult to obtain accurate normal vectors for each voxel for the lighting calculations needed to convey the spatial impression of the material distribution. Traditional methods mostly replace a voxel's normal vectors with the central-difference gradient at the voxel center, which cannot reflect the variation among the screen pixels the voxel influences. Webber (Webber R E. Ray tracing voxel based data via biquadratic local surface interpolation. The Visual Computer, 1990, 6(1): 8-15) interpolates a surface within each voxel by a biquadratic function of the gray values of the voxel and its 26 neighbors, and computes the normal vector from this surface; this reflects the slight variation of a voxel's influence over different screen pixels, but its computational cost is too large and severely impairs imaging speed.
A boundary voxel is a mixture of different materials, with an isosurface existing inside the voxel. The present invention uses a direction-dependent trilinear interpolation to find the intersection of the viewing direction with this isosurface and computes the normal vector at that intersection; for the ray's influence on one pixel, it suffices to compute the lighting effect at the intersection point. We take the mean m of the gray values at the voxel's 8 vertices as the threshold, and the viewing direction as the interpolation direction within the voxel. Let the two intersections of the sight line with the voxel be p1 and p2, with values v1 and v2; the intersection of this ray with the isosurface defined by m is then:
p = (1 - k)p1 + kp2
where k = (m - v1)/(v2 - v1). k must satisfy 0 ≤ k ≤ 1 to guarantee that the intersection lies inside the voxel; otherwise we consider that this voxel does not influence the pixel the ray reaches. After computing the position of the intersection in the voxel, we obtain the gradient at the intersection by trilinear interpolation of the gradients at the voxel's 8 vertices, and use it in place of the normal vector at that point for the lighting calculation, as the voxel's brightness contribution to the pixel it influences.
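A sketch of the intersection computation and of a generic trilinear interpolation that can be used for the vertex gradients (assumptions: v1 ≠ v2, and the 8 corner values are supplied in z-major order):

```python
import numpy as np

def isosurface_hit(p1, v1, p2, v2, m):
    """Intersection of the ray segment p1->p2 with the isosurface f = m.
    Returns the point, or None if the surface is not crossed inside the voxel."""
    if v1 == v2:
        return None
    k = (m - v1) / (v2 - v1)
    if not 0.0 <= k <= 1.0:   # intersection outside the voxel: no contribution
        return None
    return (1.0 - k) * np.asarray(p1, float) + k * np.asarray(p2, float)

def trilinear(corner_vals, t):
    """Trilinear interpolation of 8 corner values (scalars or vectors, z-major
    corner order assumed) at local coordinates t = (tx, ty, tz) in [0, 1]^3."""
    tx, ty, tz = t
    c = np.asarray(corner_vals, float).reshape(2, 2, 2, -1)
    c = c[0] * (1 - tz) + c[1] * tz    # collapse z
    c = c[0] * (1 - ty) + c[1] * ty    # collapse y
    c = c[0] * (1 - tx) + c[1] * tx    # collapse x
    return c
```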
Step 4: lighting calculation at the intersection point
The color value at the intersection of the viewing direction with the isosurface inside a boundary voxel is obtained from the Phong illumination model, that is:
C = Ia + Kd·Il·(N·L) + Ks·Il·(N·V′)
where:
Kd, Ks are the diffuse and specular reflection coefficients;
Ia, Il are the ambient light intensity and the main light source intensity;
N, L, V are the normalized surface normal, the light source direction and the reverse viewing direction; V′ = (V + L)/|V + L|.
The surface normal N at the intersection is obtained as in Step 3.
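A direct transcription of the lighting formula as written (a sketch with scalar intensities; V′ is the normalized halfway vector between the viewing and light directions, and the specular exponent usual in the Phong model is omitted because the text omits it):

```python
import numpy as np

def normalize(v):
    v = np.asarray(v, float)
    return v / np.linalg.norm(v)

def shade(N, L, V, Ia, Il, Kd, Ks):
    """Brightness at an isosurface intersection point.
    N: surface normal, L: direction to the light, V: direction to the viewer."""
    N, L, V = normalize(N), normalize(L), normalize(V)
    H = normalize(V + L)                       # V' = (V + L)/|V + L|
    diffuse = Kd * Il * max(np.dot(N, L), 0.0)
    specular = Ks * Il * max(np.dot(N, H), 0.0)
    return Ia + diffuse + specular
```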
Step 5: pixel color compositing on the projection plane
We display the boundary voxels quickly with the projection imaging method: for any viewing direction, the boundary voxels are projected onto the projection plane slice by slice, row by row and column by column. Images are composited from back to front, with the formulas:
C_out·α_out = C_now·α_now + (1 - α_now)·C_in·α_in
α_out = α_now + (1 - α_now)·α_in
where C_now, α_now are the color and opacity at the isosurface intersection within the current voxel, C_in, α_in are the color and opacity accumulated before the sight line reaches that intersection, and C_out, α_out are the corresponding values after the color and opacity at the intersection have been composited in.
We allocate a buffer IB whose size is the number of pixels on the projection plane, initialized to the background color. As the boundary voxels are projected, for each of the pixels p that a voxel influences, IB(p) is updated to C_out·α_out, until all boundary voxels have been processed; the pixel colors then held in the buffer form the final displayed image.
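Per pixel, the two compositing formulas can be applied by tracking the premultiplied color C·α (a sketch; scalar gray values stand in for RGB):

```python
def composite_back_to_front(samples, background=0.0):
    """samples: list of (color, opacity) ordered from back (far) to front (near).
    Tracks the premultiplied color P = C*alpha per the formulas above:
        P_out = C_now*a_now + (1 - a_now)*P_in
        a_out = a_now + (1 - a_now)*a_in
    Returns the final premultiplied color and opacity for one pixel."""
    p, a = background, 0.0
    for c_now, a_now in samples:       # each nearer sample is layered over the rest
        p = c_now * a_now + (1.0 - a_now) * p
        a = a_now + (1.0 - a_now) * a
    return p, a
```

The buffer IB then simply stores this running premultiplied color for each pixel as the boundary voxels are projected back to front.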
Experiments show that this volume-rendering-based multi-surface display method for three-dimensional data fields is effective: the displayed images clearly reflect the overall situation of the data field, and rendering is fast, essentially meeting the requirement of real-time display.
Embodiment:
We applied the method in a three-dimensional medical image processing and analysis system designed and implemented by ourselves. This microcomputer-based system, 3dmed, runs under Windows NT and Windows 98, is built with object-oriented methods and software engineering standards, is implemented in C++, and is a three-dimensional image processing and analysis system oriented to the medical domain. The system offers rich graphics and image processing and analysis functions: not only complete two-dimensional image processing and analysis, but also powerful three-dimensional processing and analysis, network transmission and storage. Its functions include data input, image data management, two-dimensional processing, three-dimensional data processing, slice recombination, three-dimensional display, surgical simulation, virtual endoscopy, PACS and remote diagnosis.
The following describes the concrete process of using the three-dimensional Euclidean distance transform to realize soft tissue display in CT images. The experimental data are human head CT images: 58 slices in total, slice spacing 1.5 mm, and a resolution of 512 × 512 × 16.
1) First read in the data through the data interface.
2) Click the layered display button to enter the soft tissue display interface.
3) After entering the soft tissue layered display interface, if a previously generated binary file exists, choose "display existing binary file"; otherwise choose the "create new binary file" button. The latter opens the binary file generation interface, where the user can select a segmentation algorithm to extract the contour boundary; when segmentation finishes, control returns to the soft tissue layered display interface. The newly created or opened binary file is displayed below the original image.
4) Fill in the gray range and depth range boxes; via "layered display", the system performs the distance transform and then the peeling operation, and the right-hand image shows the processed two-dimensional image.
5) Save the newly generated set of two-dimensional slice data.
6) Open the newly generated CT images and enter the volume rendering parameter definition interface, where skin surface, bone and muscle can be separated into layers by the classification function; then perform three-dimensional display by volume rendering. The soft tissue display effect is shown in Fig. 6.
The above experimental results are consistent with the inventor's theoretical analysis of the method of realizing soft tissue display in CT images with the three-dimensional Euclidean distance transform, demonstrating high credibility and applicability.

Claims (9)

1. A method of using the three-dimensional Euclidean distance transform to realize soft tissue display in CT images, comprising the steps of:
(1) an extraction step: extracting object contours from the medical image;
(2) a distance transform step: generating a distance image using a fast three-dimensional Euclidean distance transform;
(3) a processing step (peeling): processing the three-dimensional data to generate new three-dimensional data satisfying given depth and gray requirements;
(4) a display step: classifying and volume rendering the newly generated three-dimensional data.
2. The method of claim 1, wherein the extraction step is a preprocessing step for the distance transform step.
3. The method of claim 1, wherein the fast three-dimensional Euclidean distance transform adopts an optimization method, namely reducing the number of two-dimensional image layers, and of the black points within them, that must take part in distance comparisons.
4. The method of claim 3, wherein each pixel in each two-dimensional image requires only one pass of distance computation, its nearest black point being obtained after comparison.
5. The method of claim 3, wherein the n two-dimensional images are bisected.
6. The method of claim 5, wherein the n two-dimensional images are divided into 1, 2, 4, 8, ..., n/2 equal parts, performing log₂n bisections in all.
7. The method of claim 6, wherein the number of distance computations needed to obtain the nearest black points of all pixels in the bisection-center slices is less than 1.5n³.
8. The method of claim 1, wherein the fast three-dimensional Euclidean distance transform comprises:
Step 1: perform a two-dimensional distance transform on the n two-dimensional binary images, finding for each pixel (i, j, k) the nearest black point within its own slice, denoted M(i, j, k);
Step 2: for each pixel (i, j, n) of slice n, compute and compare the distances to the n-1 points M(i, j, k) (k = 1, 2, ..., n-1) in turn; the closest of them is its nearest black point in three-dimensional space;
Step 3: set the bisection variable d to the initial value 2;
Step 4: divide the n slices into d equal parts; the slice numbers q of the division points are, in turn, q = n/d, 3n/d, ..., (d-1)n/d, i.e. the odd multiples of n/d, with q < n;
Step 5: compute the nearest black points of the pixels in each slice q, q = n/d, 3n/d, ..., (d-1)n/d: for each pixel (i, j, q) of slice q, compute and compare the distances in turn to the points M(i, j, k) lying within the range where its nearest black point can possibly be; the closest of them is its nearest black point in three-dimensional space;
Step 6: if d ≥ n, all n slices have been scanned and the transform is finished; otherwise bisect once more on the basis of the previous division, i.e. d = 2d, and return to Step 4 to continue.
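The core observation of this claim, namely that the nearest black point of a voxel (i, j, q) is always one of the in-slice nearest points M(i, j, k), can be sketched as follows (an illustrative, unoptimized Python transcription; the 2D transform is brute force, and the bisection pruning of steps 3-6 is omitted for brevity):

```python
import numpy as np

def nearest_in_slice(binary_slice):
    """Step 1: for every pixel of one slice, the nearest black (foreground)
    point within that same slice. Brute force for clarity."""
    pts = np.argwhere(binary_slice)           # (row, col) of black points
    if len(pts) == 0:
        return None                           # slice contributes no candidates
    h, w = binary_slice.shape
    nearest = np.zeros((h, w, 2), dtype=int)
    for i in range(h):
        for j in range(w):
            d2 = (pts[:, 0] - i) ** 2 + (pts[:, 1] - j) ** 2
            nearest[i, j] = pts[np.argmin(d2)]
    return nearest

def edt3d_by_slices(volume):
    """Nearest 3D black point per voxel: for a voxel at (i, j, q), only the
    in-slice nearest points M(i, j, k) of the n slices need to be compared
    (the bisection of steps 3-6, which prunes the range of k, is omitted).
    Assumes the volume contains at least one black point."""
    n = volume.shape[0]
    M = [nearest_in_slice(volume[k]) for k in range(n)]
    h, w = volume.shape[1:]
    dist = np.empty(volume.shape, dtype=float)
    for q in range(n):
        for i in range(h):
            for j in range(w):
                best = np.inf
                for k in range(n):
                    if M[k] is None:
                        continue
                    pi, pj = M[k][i, j]
                    best = min(best, (q - k) ** 2 + (i - pi) ** 2 + (j - pj) ** 2)
                dist[q, i, j] = np.sqrt(best)
    return dist
```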
9. The method of claim 1, wherein classifying and volume rendering the three-dimensional data comprises the steps of:
(1) material boundary extraction;
(2) opacity assignment for different materials;
(3) computation of the normal vector at the intersection of the line of sight with the isosurface inside boundary voxels;
(4) lighting calculation at the intersection points;
(5) pixel color compositing on the projection plane.
CN01142133A 2001-09-13 2001-09-13 3D Euclidean distance transformation process for soft tissue display in CT image Pending CN1403057A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN01142133A CN1403057A (en) 2001-09-13 2001-09-13 3D Euclidean distance transformation process for soft tissue display in CT image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN01142133A CN1403057A (en) 2001-09-13 2001-09-13 3D Euclidean distance transformation process for soft tissue display in CT image

Publications (1)

Publication Number Publication Date
CN1403057A true CN1403057A (en) 2003-03-19

Family

ID=4676648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN01142133A Pending CN1403057A (en) 2001-09-13 2001-09-13 3D Euclidean distance transformation process for soft tissue display in CT image

Country Status (1)

Country Link
CN (1) CN1403057A (en)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1315102C (en) * 2003-12-12 2007-05-09 西北工业大学 Method and system for transparent roaming body of object
CN1331100C (en) * 2003-12-22 2007-08-08 李浩宇 Establishing method of 3D interacting model of human skeleton unknown body and its use
CN101075346B (en) * 2005-11-23 2012-12-05 爱克发医疗保健公司 Method for point-of-interest attraction in digital images
CN100583164C (en) * 2007-01-24 2010-01-20 中国科学院自动化研究所 Method for abstracting grade framework and stereo decomposing of arborescence figure
CN101542535B (en) * 2007-02-22 2012-06-27 汤姆科技成像系统有限公司 Method and apparatus for representing 3D image records in 2D images
CN101393644B (en) * 2008-08-15 2010-08-04 华中科技大学 Hepatic portal vein tree modeling method and system thereof
CN102548479A (en) * 2009-10-06 2012-07-04 皇家飞利浦电子股份有限公司 Automatic c-arm viewing angles for structural heart disease treatment
CN102548479B (en) * 2009-10-06 2014-11-26 皇家飞利浦电子股份有限公司 Automatic C-arm viewing angles for structural heart disease treatment
CN102395320A (en) * 2010-06-02 2012-03-28 奥林巴斯医疗株式会社 Medical apparatus and method for controlling the medical apparatus
CN102395320B (en) * 2010-06-02 2014-02-26 奥林巴斯医疗株式会社 Medical apparatus and method for controlling the medical apparatus
CN102737361A (en) * 2012-06-20 2012-10-17 四川师范大学 Method for transforming full distance of three-dimensional binary image
CN102737361B (en) * 2012-06-20 2014-11-05 四川师范大学 Method for transforming full distance of three-dimensional binary image
CN106999122A (en) * 2014-08-15 2017-08-01 牛津大学创新有限公司 Tissues surrounding vascular characterizing method
US10695023B2 (en) 2014-08-15 2020-06-30 Oxford University Innovation Limited Method for characterisation of perivascular tissue
CN106999122B (en) * 2014-08-15 2020-10-20 牛津大学创新有限公司 Perivascular tissue characterization method
CN105455830B (en) * 2014-09-29 2018-12-07 西门子股份公司 System for selecting the method for record area and for selecting record area
CN105455830A (en) * 2014-09-29 2016-04-06 西门子股份公司 Method for selecting a recording area and system for selecting a recording area
CN104933751A (en) * 2015-07-20 2015-09-23 上海交通大学医学院附属瑞金医院 Angiocarpy coronary artery enhanced volume rendering method and system based on local histograms
CN104933751B (en) * 2015-07-20 2017-10-20 上海交通大学医学院附属瑞金医院 The enhanced object plotting method of cardiovascular coronary artery and system based on local histogram
CN108257202A (en) * 2017-12-29 2018-07-06 四川师范大学 A kind of medical image volume based on usage scenario rebuilds optimization method
CN108257202B (en) * 2017-12-29 2021-09-10 四川师范大学 Medical image volume reconstruction optimization method based on use scene
TWI670682B (en) * 2018-05-11 2019-09-01 台達電子工業股份有限公司 Image distance transformation apparatus and method using bi-directional scan
CN110415792A (en) * 2019-05-31 2019-11-05 上海联影智能医疗科技有限公司 Image detecting method, device, computer equipment and storage medium
CN110415792B (en) * 2019-05-31 2022-03-25 上海联影智能医疗科技有限公司 Image detection method, image detection device, computer equipment and storage medium
CN110796693A (en) * 2019-09-11 2020-02-14 重庆大学 Method for directly generating two-dimensional finite element model from industrial CT slice image
CN110796693B (en) * 2019-09-11 2023-03-21 重庆大学 Method for directly generating two-dimensional finite element model from industrial CT slice image
CN112215799A (en) * 2020-09-14 2021-01-12 北京航空航天大学 Automatic classification method and system for grinded glass lung nodules
CN117115468A (en) * 2023-10-19 2023-11-24 齐鲁工业大学(山东省科学院) Image recognition method and system based on artificial intelligence
CN117115468B (en) * 2023-10-19 2024-01-26 齐鲁工业大学(山东省科学院) Image recognition method and system based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN1403057A (en) 3D Euclidean distance transformation process for soft tissue display in CT image
US9984460B2 (en) Automatic image segmentation methods and analysis
Preim et al. A survey of perceptually motivated 3d visualization of medical image data
Udupa Three-dimensional visualization and analysis methodologies: a current perspective
CN112086197B (en) Breast nodule detection method and system based on ultrasonic medicine
EP0988620B1 (en) Image segmentation method
CN1663530A (en) Methods and apparatus for processing image data to aid in detecting disease
CN1452089A (en) Semiautomatic PET tumor image section algorithm
CN1759812A (en) Ultrasonic diagnostic equipment and image processing method
Cipriano et al. Deep segmentation of the mandibular canal: a new 3D annotated dataset of CBCT volumes
Mankovich et al. Three-dimensional image display in medicine
WO2018161257A1 (en) Method and system for generating colour medical images
CN112820399A (en) Method and device for automatically diagnosing benign and malignant thyroid nodules
Brinkley A flexible, generic model for anatomic shape: Application to interactive two-dimensional medical image segmentation and matching
CN116188452A (en) Medical image interlayer interpolation and three-dimensional reconstruction method
CN1430185A (en) Ultralarge scale medical image surface reconstruction method based on single-layer surface tracking
Tiago et al. A data augmentation pipeline to generate synthetic labeled datasets of 3D echocardiography images using a GAN
CN105787978A (en) Automatic medical image interlayer sketching method, device and system
CN108399354A (en) The method and apparatus of Computer Vision Recognition tumour
Durgadevi et al. Deep survey and comparative analysis of medical image processing
CN116402756A (en) X-ray film lung disease screening system integrating multi-level characteristics
CN115439650A (en) Kidney ultrasonic image segmentation method based on CT image cross-mode transfer learning
CN111127636B (en) Intelligent complex intra-articular fracture desktop-level three-dimensional diagnosis system
Amini Head circumference measurement with deep learning approach based on multi-scale ultrasound images
CN114387380A (en) Method for generating a computer-based visualization of 3D medical image data

Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication