WO2004068401A2 - Method for the interactive segmentation of an object in a three-dimensional data record - Google Patents

Method for the interactive segmentation of an object in a three-dimensional data record Download PDF

Info

Publication number
WO2004068401A2
WO2004068401A2 (PCT/IB2004/000189)
Authority
WO
WIPO (PCT)
Prior art keywords
segmentation
voxels
steps
intermediate image
dimensional
Prior art date
Application number
PCT/IB2004/000189
Other languages
French (fr)
Other versions
WO2004068401A3 (en)
Inventor
Christian Lorenz
Thorsten SCHLATHÖLTER
Steffen Renisch
Ingwer Carlsen
Original Assignee
Philips Intellectual Property & Standards Gmbh
Koninklijke Philips Electronics N. V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Philips Intellectual Property & Standards Gmbh, Koninklijke Philips Electronics N. V.
Publication of WO2004068401A2
Publication of WO2004068401A3

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Definitions

  • the invention relates to a method for the interactive, voxel-based segmentation and visualization of a three-dimensional object in a three-dimensional, in particular medical, data record. Furthermore, the invention relates to an image processing device for carrying out the method and to a computer program for controlling the image processing device.
  • voxel-based in this context refers to segmentation methods in which the direct result is a set of voxels which represents the segmented object. That is to say that the result is not, for example, the surface of the segmented object as in other known methods, by means of which it is of course also possible to generate a set of voxels, but rather the voxel-based segmentation method supplies this set of voxels directly.
  • Neighboring voxels of the seed voxel are then examined with respect to an assignment criterion to ascertain whether they do or do not belong to the object.
  • Neighboring voxels are in this context all voxels which according to any definition of neighboring are assigned to a given voxel as a neighbor. This definition of neighboring may define as neighboring voxels, for example, the set of all voxels adjoining a given voxel.
  • the assignment criterion may be, for example, the fact of being within a range of values of the data values of the voxels. If a data value lies within the range of values, then the corresponding voxel is assigned to the object that is to be segmented. These voxels which are assigned to the object are referred to below as object voxels.
  • all neighboring voxels that adjoin the object voxels and have not yet been considered are then examined with respect to the assignment criterion and where appropriate also assigned to the object. The examination of the neighboring voxels of newly determined object voxels and the subsequent assignment to the object is repeated until no more object voxels can be determined or until another termination criterion is met. This termination criterion may be, for example, a predetermined number of iterations.
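The region growing process described in the preceding bullets can be sketched as follows. This is a minimal illustration, not the patented method itself: the 6-connected neighborhood and the value-range assignment criterion are the examples named in the text, while the dict-based volume and the function name are assumptions made for the sketch.

```python
from collections import deque

def region_grow(volume, seed, value_range, max_steps=None):
    """Voxel-based region growing: starting from a seed voxel, repeatedly
    examine the not-yet-considered neighbors of newly determined object
    voxels and assign those whose data value lies within `value_range`
    (the assignment criterion) to the object.

    `volume` maps (x, y, z) -> data value; `value_range` is an inclusive
    (low, high) tuple; `max_steps` is an optional termination criterion
    (maximum number of iterations). Returns the set of object voxels.
    """
    lo, hi = value_range
    object_voxels = {seed}
    frontier = deque([seed])
    steps = 0
    while frontier and (max_steps is None or steps < max_steps):
        steps += 1
        next_frontier = deque()
        while frontier:
            x, y, z = frontier.popleft()
            # 6-connected neighborhood: all voxels adjoining the given voxel
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (x + dx, y + dy, z + dz)
                if n in object_voxels or n not in volume:
                    continue  # already considered, or outside the data record
                if lo <= volume[n] <= hi:  # assignment criterion
                    object_voxels.add(n)
                    next_frontier.append(n)
        frontier = next_frontier
    return object_voxels
```

One call to the inner loop corresponds to one segmentation step (one "generation" of newly examined neighbors); the method stops when no more object voxels can be determined or the step limit is reached.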
  • Input parameters may be, for example, the maximum number of iterations, the number of seed voxels and the position thereof in the three-dimensional data record or else the assignment criterion.
  • the quality of the segmentation achieved in this way crucially depends on the quality of the assignment criterion. It may occur that voxels are assigned to the object even though they do not belong thereto, or that voxels which do belong to the object are not assigned to it - or both. It is therefore often useful to repeat the segmentation a number of times with different assignment criteria, that is to say for example with different ranges of values, in order to obtain an optimum segmentation result. This repetition of the complete segmentation procedure, which may in some circumstances be carried out a number of times, is very time-intensive.
  • This object is achieved according to the invention by a method for the interactive, voxel-based segmentation of a three-dimensional object in a three-dimensional, in particular medical, data record with on-going visualization of the respective segmentation status, where the segmentation comprises a number of segmentation steps, having the steps: a) carrying out of at least one segmentation step, where each segmentation step supplies a set of object voxels, b) updating of, or after the first segmentation step generation of, a two-dimensional intermediate image with the aid of the object voxels, c) repetition of steps a) and b) at least once, or proceeding with step d), d) visualization of the segmentation status with the aid of the intermediate image, e) repetition of steps a) to d) until a termination criterion is met.
  • step b) a two-dimensional intermediate image is updated or generated, which two-dimensional intermediate image is used to visualize the three-dimensional, already segmented part of the object.
  • This use of a two-dimensional image to display a three-dimensional object considerably reduces the computational complexity in visualization step d).
  • the resulting rapid visualization of the segmentation status allows display of the already segmented part of the object during the segmentation.
  • a user, for example a radiologist, can therefore view the progress of the segmentation and intervene in the process while it is running, for example by stopping the segmentation or changing input parameters without awaiting the end of segmentation. This leads to a reduced time requirement.
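The interplay of steps a) to e) can be sketched as a control loop. All function names here are hypothetical placeholders for illustration; the actual segmentation step, intermediate-image update, visualization and termination criterion are supplied by the concrete embodiment.

```python
def interactive_segmentation(do_segmentation_step, update_intermediate_image,
                             visualize, steps_per_visualization, is_terminated):
    """Control loop for steps a) to e): after every `steps_per_visualization`
    segmentation steps, the two-dimensional intermediate image is used to
    visualize the current segmentation status, so that a user can view the
    progress and intervene without awaiting the end of segmentation.
    `is_terminated` is the termination criterion of step e) and must also
    cover the case that no new object voxels can be determined."""
    intermediate_image = {}  # generated with the first call to step b)
    while not is_terminated():                                     # step e)
        for _ in range(steps_per_visualization):                   # step c)
            new_voxels = do_segmentation_step()                    # step a)
            update_intermediate_image(intermediate_image, new_voxels)  # step b)
            if not new_voxels:
                break  # no more object voxels: visualize immediately
        visualize(intermediate_image)                              # step d)
```

Because the intermediate image is two-dimensional, the visualization in step d) stays cheap enough to run between segmentation steps.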
  • Claim 2 describes an embodiment which reduces the computational complexity for updating of the intermediate image in step b).
  • Claim 3 describes an embodiment which further reduces the computational complexity in visualization step d).
  • Claim 4 describes a preferred type of updating of the intermediate image, which in the use of this intermediate image as described in claim 5 leads to good-quality visualizations.
  • Claims 6 and 7 explain that the method according to the invention may comprise an expansion mode and additionally also a contraction mode. This gives the user the possibility of choosing between these two modes. If the segmentation has not yet progressed far enough, the user will choose the expansion mode and thus continue the region growing process. If, on the other hand, the segmentation has progressed too far, so that the segmentation has already passed into regions of the three-dimensional data record which no longer belong to the object, then the user will choose the contraction mode, where the segmentation is reversed in a voxel-by-voxel manner so that earlier stages of the segmentation can be returned to. In this way, the best possible segmentation for the given input parameters can be readily achieved.
  • Claim 8 describes an assignment criterion and claim 9 describes a termination criterion, said criteria leading to good segmentation results.
  • Claim 10 describes an image processing device for carrying out the method according to the invention.
  • Claim 11 describes a computer program for such an image processing device.
  • FIG. 1 shows an image processing device that is suitable for carrying out the method according to the invention.
  • Fig. 2 shows a flowchart of the method according to the invention.
  • Fig. 3 and Fig. 4 show a schematic view of a segmented object and of a projection plane and intermediate image, respectively.
  • the image processing device shown in Fig. 1 comprises an image processing and control processor 1 having a memory 2 in which a three-dimensional, in particular medical, data record can be stored.
  • the image processing and control processor 1 is connected via a bus system 3, for example via an optical fiber cable 3, to a medical diagnosis device (not shown), such as a magnetic resonance, computer-aided tomography or ultrasound device.
  • the connection to a diagnosis device may be omitted.
  • the object segmented with the aid of the image processing and control processor 1, or the intermediate results of the segmentation, that is to say the segmentation status, is/are displayed on a monitor 4.
  • Fig. 2 shows the progress of a segmentation method which can be carried out using the image processing device shown in Fig. 1.
  • step 102 a three-dimensional, medical data record is loaded which contains the object that is to be segmented.
  • the data record has been reconstructed from measured values which have been generated by means of a computer-aided tomography scanner.
  • the reconstructed data values are gray values, e.g. Hounsfield values.
  • a non-medical data record, or else a different type of data values, i.e. not gray values, could also be used to carry out the method according to the invention.
  • the data values could also have been generated by other three-dimensional imaging methods and devices, such as magnetic resonance or ultrasound methods and devices.
  • step 104 input parameters are selected, the number and type of which is dependent on the chosen segmentation method.
  • any voxel-based segmentation method which supplies a set of object voxels after each segmentation step may be used.
  • a segmentation method having an expansion mode is described, which as input parameters comprises a seed voxel, an assignment criterion and the number of segmentation steps between two successive visualizations.
  • the assignment criterion is in this case a range of values, where it is assumed that a voxel having a data value which lies within this range of values forms part of the object that is to be segmented.
  • a viewing direction may be defined, that is to say the direction from which a viewer sees the segmented object after visualization.
  • more than one seed voxel could also be specified, and this is necessary, for example, when two spatially separate objects, e.g. the two shoulder blades, are to be segmented in the three-dimensional data record. It is also possible to define different types of seed voxels in order to be able to distinguish different objects of a similar consistency, for example to distinguish the shoulder blades from the spinal column. In such a case, a different assignment criterion may apply for each type of seed voxel.
  • a segmentation step is carried out starting from the seed voxel or from the already determined object voxels.
  • the seed voxel forms part of the object that is to be segmented.
  • a check is made as to whether the data values or gray values of the neighboring voxels of the voxels that already belong to the segmented object lie within the range of values, where only voxels which have not already been considered in previous passes are examined.
  • the neighboring voxels are voxels which adjoin a given voxel.
  • this voxel is assigned to the object that is to be segmented and designated an object voxel.
  • the segmented object therefore consists of the voxels which belonged to the object prior to this step and of the neighboring voxels which lie within the range of values. In this way, the object expands. This is therefore referred to as the expansion mode.
  • step 106 supplies a set of object voxels (voxel-based segmentation method).
  • the way in which this set of voxels is generated may vary from embodiment to embodiment.
  • step 107 a check is made as to whether the change to the set of object voxels in step 106 alters the two-dimensional intermediate image. If this is the case, this change is taken into account and the intermediate image is updated. After the first segmentation step, there is as yet no intermediate image and an intermediate image is therefore generated.
  • the intermediate image is described with the aid of the arrangement shown in Fig. 3. In said figure, a few object voxels 10 are shown, these being partially visible from the viewing direction 16. It is assumed in this case that an object voxel is visible if it can be struck by a ray 12 that is oriented parallel to the viewing direction 16. In other embodiments, the rays 12 could also run in a divergent manner from a point.
  • the object voxels are assumed to be non-transparent, so that in each case only the voxel 10 which comes first with respect to the viewing direction 16 is seen.
  • the visible object voxels 10 are shown by hatching.
  • a projection plane 14 is arranged perpendicular to the viewing direction 16.
  • for each visible object voxel 10, the distance of this voxel in the viewing direction from the projection plane 14 is determined. Furthermore, for each visible object voxel 10, the projection location is determined, that is to say the location in the projection plane 14 at which the object voxel 10 is projected in the viewing direction. The distance of each object voxel 10 that has become visible is then assigned to the projection location of this voxel, where this distance assignment constitutes the two-dimensional intermediate image.
  • the intermediate image can be stored in the memory 2 of the image processing device in a known manner. This type of intermediate image is also referred to as a "z buffer".
  • a newly determined object voxel changes the intermediate image if it can be seen in the viewing direction and possibly hides an object voxel which was previously visible. If this is the case, the intermediate image is updated by determining, for the newly determined object voxel, the distance from the projection plane and assigning it to the projection location of this object voxel. The distance and the projection location of the object voxel which is now possibly hidden by the new object voxel are erased from the intermediate image. In the memory 2 of the image processing device, note is taken of which points of the intermediate image have changed since the last visualization.
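The z-buffer update for a newly determined object voxel can be sketched as follows. This is a minimal sketch under the assumptions stated in the text: the viewing direction is the z direction, so the projection location of a voxel (x, y, z) is (x, y) and its distance from the projection plane is z; the buffer is a dict and the function name is chosen for illustration.

```python
def update_z_buffer(z_buffer, new_voxel, changed):
    """Insert a newly determined object voxel (x, y, z) into the z buffer.

    The new voxel changes the intermediate image only if its projection
    location was previously empty or if it hides an object voxel that was
    previously visible, i.e. it lies closer to the projection plane. The
    distance of the now-hidden voxel is thereby replaced (erased) from the
    buffer. `changed` records which points of the intermediate image have
    changed since the last visualization.
    """
    x, y, z = new_voxel
    old = z_buffer.get((x, y))
    if old is None or z < old:   # visible: nearer to the projection plane
        z_buffer[(x, y)] = z
        changed.add((x, y))
```

Keeping the `changed` set alongside the buffer is what later allows the visualization to recompute only the projection locations that actually changed.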
  • the segmentation step 106 could also be carried out a number of times before continuing with step 107.
  • step 108 a check is made as to whether the number of segmentation steps between two successive visualizations, said number having been indicated in step 104, has already been carried out. If this is the case, or if no new object voxels have been determined in step 106, then the method proceeds with step 110. Otherwise, steps 106 and 107 are repeated.
  • step 110 the object voxels which are visible in the viewing direction are visualized. That is to say only the surface of the object is displayed. In this case, the visualization is effected with the aid of the two-dimensional intermediate image, without using the three-dimensional set of voxels that has already been assigned to the object. Therefore, during the visualization it is not necessary to carry out computing operations using the voxels of the three-dimensional object, and this considerably reduces the computational complexity.
  • the brightness of the voxel that is to be displayed depends on the angle between the surface normal of the object at the location of the voxel and the viewing direction. If the surface normal is oriented parallel to the viewing direction, then the voxel is displayed in light color. If the normal is oriented perpendicular to the viewing direction, then the voxel is displayed in dark color.
  • This type of display is a simple version of "Phong shading". "Phong shading" is described, for example, in "Computer Graphics: Principles and Practice", J. D. Foley et al., Addison-Wesley, 1997, pp. 738 ff.
  • the tangential plane of the surface at this point is determined. This tangential plane is approximated by a plane which is spanned by two vectors formed from the differences of the distances z kl at neighboring projection locations in the x direction and in the y direction, respectively.
  • (x k , y l ) designates the projection locations at which the visible object voxels are projected in the viewing direction, where the x positions x k and the y positions y l refer to the coordinate system 18 in Fig. 3.
  • the distance of a visible object voxel 10 from the projection plane 14 in the viewing direction 16 is designated z kl .
  • n kl is determined for each visible object voxel and thus for each projection location (x k , y l ) .
  • the brightness values of the visible object voxels and of the projection locations (x k , y l ) are determined.
  • a fictitious light source illuminates the object in the viewing direction with rays which are parallel to one another.
  • the intensity I kl in the viewing direction of the radiation reflected by the surface is then to a rough approximation proportional to the value of the scalar product of the surface normal and the normalized viewing direction, where in this example of embodiment it is assumed that the viewing direction is the z direction: I kl ∝ |n kl · e z | / |n kl |.
  • a brightness value which is proportional to I kl is now assigned to each point (x k , y l ). These brightness values are displayed on the monitor 4 at the points (x k , y l ) and show the visualization of the object segmented up to this iteration step.
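The shading described above can be sketched as follows. This is a rough illustration of the technique, not the patent's exact formulas (which are not reproduced in this text): the tangential plane at each projection location is spanned by one-sided finite-difference vectors (1, 0, dz/dx) and (0, 1, dz/dy), their cross product gives the surface normal n kl, and the brightness I kl is the scalar product of the normalized normal with the viewing direction (the z direction).

```python
import math

def shade_z_buffer(z_buffer, width, height, background=None):
    """Simple Phong-style shading of a z buffer: surfaces facing the viewer
    (normal parallel to the viewing direction) are rendered light, surfaces
    seen edge-on are rendered dark. `z_buffer` maps projection locations
    (x, y) to distances z; locations without an object voxel get `background`.
    Returns a height x width image of brightness values in [0, 1].
    """
    image = [[background] * width for _ in range(height)]
    for (x, y), z in z_buffer.items():
        # one-sided finite differences; 0 where no neighbor exists
        dzdx = z_buffer.get((x + 1, y), z) - z
        dzdy = z_buffer.get((x, y + 1), z) - z
        # normal of the plane spanned by (1, 0, dzdx) and (0, 1, dzdy)
        nx, ny, nz = -dzdx, -dzdy, 1.0
        norm = math.sqrt(nx * nx + ny * ny + nz * nz)
        image[y][x] = abs(nz) / norm   # I_kl = |n_kl . e_z| / |n_kl|
    return image
```

Because the shading reads only the two-dimensional buffer, no computing operations on the three-dimensional voxel set are needed during visualization.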
  • the visualization steps can be changed depending on the desired visualization quality.
  • the surface could be smoothed using known smoothing techniques prior to determination of the distances z kl or the surface normals.
  • a radiation source with divergent rays could also be used, and this would result in a corresponding adaptation of equation (3).
  • the calculated surface normals n kl and intensities I kl are stored in the memory 2, so that these values can be accessed during subsequent visualization steps.
  • the determination of the surface normal n kl and of the intensity I kl is then carried out only for those projection locations (x k , y l ) and distances assigned thereto that have changed since the last visualization. This limitation considerably reduces the computational complexity and allows the progress of the segmentation to be displayed in real time.
  • a number of seed voxels have been placed in different objects that are to be segmented, it is useful to display the different segmented objects in different colors.
  • the brightness value is then visualized not in gray values but rather in different brightness stages of the color of the respective object.
  • step 112 a check is made as to whether new object voxels have been determined in step 106. If this is not the case, the method ends at step 118.
  • step 114 a user has the possibility to intervene in the segmentation method.
  • step 116 If the user stops the segmentation, for example by pressing a key on the keyboard 5, then the method proceeds to step 116. Otherwise, the segmentation continues with the next segmentation step 106.
  • step 116 the user can choose between changing the input parameters or terminating the segmentation method. If the user chooses the latter option, the method ends at step 118. Otherwise the method proceeds with step 120.
  • step 120 the user has the possibility to change input parameters. The user can change, for example, the assignment criterion, that is to say in this case the range of values, or the number of segmentation steps between two visualizations. If the user changes the viewing direction, then the intermediate image is computed anew as described above for all object voxels that are now visible. After the input parameters have been changed, the method proceeds with segmentation step 106.
  • the voxels displayed in step 110 show the segmentation of the object.
  • step 106 contraction of the object may take place (contraction mode), in which voxels which have already been assigned to the object are removed again. This may be useful, for example, if parts of the object which hide the region that is important for a specific use or which do not form part of the object have already been segmented.
  • the choice between the expansion mode and the contraction mode may be made by the user in step 120.
  • step 106 a check is then firstly made as to which mode the user has selected.
  • step 106 is carried out as described above, with note additionally being taken in the memory 2 during each segmentation step of which voxels have been assigned to the object.
  • step 107 If in the contraction mode a visible object voxel is removed, in step 107 the corresponding projection location and the corresponding distance are erased from the intermediate image. In addition, it is ascertained whether an object voxel which has been hidden up to this point can now be seen. If this is the case, then for this voxel the distance in the viewing direction from the projection plane is determined and assigned to the projection location of this voxel. This assignment is added to the intermediate image. Note is taken in the memory 2 of which regions of the intermediate image, that is to say which projection locations and distances, have changed since the last visualization.
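The contraction-mode update of the intermediate image can be sketched as follows, mirroring the expansion-mode z-buffer update. This is an illustrative sketch: the dict-based buffer, the linear search over the voxel set, and the function name are assumptions made for clarity; a real implementation would index object voxels by projection location.

```python
def remove_from_z_buffer(z_buffer, removed_voxel, object_voxels, changed):
    """Contraction mode: if a visible object voxel (x, y, z) is removed,
    erase its projection location and distance from the z buffer, then
    ascertain whether a previously hidden object voxel at the same
    projection location now becomes visible; if so, its distance is
    assigned to that projection location. `object_voxels` is the set of
    (x, y, z) object voxels remaining after the removal; `changed` records
    the regions of the intermediate image changed since the last
    visualization.
    """
    x, y, z = removed_voxel
    if z_buffer.get((x, y)) != z:
        return  # the removed voxel was not visible: the image is unchanged
    # distances of the remaining object voxels behind this projection location
    behind = [vz for (vx, vy, vz) in object_voxels if (vx, vy) == (x, y)]
    if behind:
        z_buffer[(x, y)] = min(behind)  # nearest hidden voxel becomes visible
    else:
        del z_buffer[(x, y)]            # the projection location is now empty
    changed.add((x, y))
```

As in the expansion mode, only the recorded `changed` locations need to be re-shaded at the next visualization.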
  • a check is not made in step 112 as to whether new object voxels have been determined in step 106. Instead, a check is made as to whether there are no longer any object voxels. If this is the case, the method ends at step 118.
  • a two-dimensional intermediate image is generated by means of which it is possible to visualize the segmentation status.
  • the updating of the intermediate image is preferably carried out on the basis of the object voxels which have been added or removed since the last updating of the intermediate image.
  • this intermediate image may also be generated by the known maximum-intensity projection or minimum-intensity projection methods.
  • MIP maximum-intensity projection
  • straight lines 22 which are parallel to one another and to the viewing direction 26 can be defined, which straight lines 22 pass through the object 20 (see Fig. 4).
  • the straight lines 22 are distributed uniformly over the projection plane 24.
  • the number of straight lines 22 is for example 512².
  • account is then taken of the set of data values, that is to say gray values or Hounsfield values, whose object voxels lie on the respective straight line. Of this set, the largest data value is determined and assigned to the projection location of the object voxel corresponding to the data value.
  • This assignment constitutes the two-dimensional intermediate image.
  • the assigned data values can be displayed on a monitor unchanged at their projection locations.
  • the intermediate image is therefore in this case directly the visualization of the object voxels.
  • minimum-intensity projection (mIP): the method is the same as in the case of MIP, with the smallest data value of the set of data values whose object voxels lie on a straight line being determined.
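The MIP variant of the intermediate image can be sketched as follows. This is a minimal sketch under the assumption that the rays run parallel to the z direction, so all object voxels with the same (x, y) lie on one straight line; the dict-based volume and function name are chosen for illustration.

```python
def maximum_intensity_projection(volume, object_voxels):
    """Maximum-intensity projection along the z direction: for each
    projection location (x, y), the largest data value (gray value or
    Hounsfield value) among the object voxels on the ray through that
    location is determined and assigned to the projection location. The
    resulting mapping is the two-dimensional intermediate image and can be
    displayed unchanged. For minimum-intensity projection (mIP), replace
    the comparison so that the smallest data value is kept.
    """
    mip = {}
    for (x, y, z) in object_voxels:
        value = volume[(x, y, z)]
        key = (x, y)
        if key not in mip or value > mip[key]:
            mip[key] = value
    return mip
```

Unlike the z-buffer variant, no shading step is needed here: the assigned data values are themselves the displayed intensities.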

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Generation (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention relates to a method for the interactive, voxel-based segmentation and visualization of a three-dimensional object in a three-dimensional, in particular medical, data record. In this case, the progress of the segmentation is visualized during the segmentation, and a user can intervene in the process while it is running. For this purpose, the segmentation takes place in steps, where each segmentation step supplies a set of object voxels by means of which a two-dimensional intermediate image is generated or updated. After each segmentation step or after a number of segmentation steps, the already segmented part of the object is visualized with the aid of the two-dimensional intermediate image.

Description

Method for the interactive segmentation of an object in a three-dimensional data record
The invention relates to a method for the interactive, voxel-based segmentation and visualization of a three-dimensional object in a three-dimensional, in particular medical, data record. Furthermore, the invention relates to an image processing device for carrying out the method and to a computer program for controlling the image processing device.
The expression "voxel-based" in this context refers to segmentation methods in which the direct result is a set of voxels which represents the segmented object. That is to say that the result is not, for example, the surface of the segmented object as in other known methods, by means of which it is of course also possible to generate a set of voxels, but rather the voxel-based segmentation method supplies this set of voxels directly.
Known methods of the type mentioned above are based on a region growing or region expansion process, in which a user defines a so-called seed voxel in an object that is to be segmented. Neighboring voxels of the seed voxel are then examined with respect to an assignment criterion to ascertain whether they do or do not belong to the object. Neighboring voxels are in this context all voxels which according to any definition of neighboring are assigned to a given voxel as a neighbor. This definition of neighboring may define as neighboring voxels, for example, the set of all voxels adjoining a given voxel. The assignment criterion may be, for example, the fact of being within a range of values of the data values of the voxels. If a data value lies within the range of values, then the corresponding voxel is assigned to the object that is to be segmented. These voxels which are assigned to the object are referred to below as object voxels. In the next step, all neighboring voxels that adjoin the object voxels and have not yet been considered are then examined with respect to the assignment criterion and where appropriate also assigned to the object. The examination of the neighboring voxels of newly determined object voxels and the subsequent assignment to the object is repeated until no more object voxels can be determined or until another termination criterion is met. This termination criterion may be, for example, a predetermined number of iterations. Once the segmentation method has been terminated, the object voxels are displayed on a display device, such as a monitor.
This method has the disadvantage that the segmentation result cannot be viewed until after termination of the segmentation. Any correction of the segmented object or of input parameters that may be necessary is accordingly only possible after termination of the segmentation. Input parameters may be, for example, the maximum number of iterations, the number of seed voxels and the position thereof in the three-dimensional data record or else the assignment criterion.
The quality of the segmentation achieved in this way crucially depends on the quality of the assignment criterion. It may occur that voxels are assigned to the object even though they do not belong thereto, or that voxels which do belong to the object are not assigned to it - or both. It is therefore often useful to repeat the segmentation a number of times with different assignment criteria, that is to say for example with different ranges of values, in order to obtain an optimum segmentation result. This repetition of the complete segmentation procedure, which may in some circumstances be carried out a number of times, is very time-intensive.
It is an object of the present invention to specify a method in which the segmentation of a three-dimensional object and visualization thereof is less time-intensive. This object is achieved according to the invention by a method for the interactive, voxel-based segmentation of a three-dimensional object in a three-dimensional, in particular medical, data record with on-going visualization of the respective segmentation status, where the segmentation comprises a number of segmentation steps, having the steps: a) carrying out of at least one segmentation step, where each segmentation step supplies a set of object voxels, b) updating of, or after the first segmentation step generation of, a two-dimensional intermediate image with the aid of the object voxels, c) repetition of steps a) and b) at least once, or proceeding with step d), d) visualization of the segmentation status with the aid of the intermediate image, e) repetition of steps a) to d) until a termination criterion is met.
By contrast with known methods, in step b) a two-dimensional intermediate image is updated or generated, which two-dimensional intermediate image is used to visualize the three-dimensional, already segmented part of the object. This use of a two-dimensional image to display a three-dimensional object considerably reduces the computational complexity in visualization step d). The resulting rapid visualization of the segmentation status allows display of the already segmented part of the object during the segmentation. A user, for example a radiologist, can therefore view the progress of the segmentation and intervene in the process while it is running, for example by stopping the segmentation or changing input parameters without awaiting the end of segmentation. This leads to a reduced time requirement.
Claim 2 describes an embodiment which reduces the computational complexity for updating of the intermediate image in step b).
Claim 3 describes an embodiment which further reduces the computational complexity in visualization step d).
Claim 4 describes a preferred type of updating of the intermediate image, which in the use of this intermediate image as described in claim 5 leads to good-quality visualizations.
Claims 6 and 7 explain that the method according to the invention may comprise an expansion mode and additionally also a contraction mode. This gives the user the possibility of choosing between these two modes. If the segmentation has not yet progressed far enough, the user will choose the expansion mode and thus continue the region growing process. If, on the other hand, the segmentation has progressed too far, so that the segmentation has already passed into regions of the three-dimensional data record which no longer belong to the object, then the user will choose the contraction mode, where the segmentation is reversed in a voxel by voxel manner so that the stages of segmentation which have already passed can be returned to. In this way, the best possible segmentation in the case of the given input parameters can be readily achieved.
Claim 8 describes an assignment criterion and claim 9 describes a termination criterion, said criteria leading to good segmentation results.
Claim 10 describes an image processing device for carrying out the method according to the invention, and claim 11 describes a computer program for such an image processing device.
The invention will be further described with reference to examples of embodiments shown in the drawings to which, however, the invention is not restricted. Fig. 1 shows an image processing device that is suitable for carrying out the method according to the invention.
Fig. 2 shows a flowchart of the method according to the invention. Fig. 3 and Fig. 4 show a schematic view of a segmented object and of a projection plane and intermediate image, respectively.
The image processing device shown in Fig. 1 comprises an image processing and control processor 1 having a memory 2 in which a three-dimensional, in particular medical, data record can be stored. The image processing and control processor 1 is connected via a bus system 3, for example via a glass fiber cable 3, to a medical diagnosis device (not shown), such as a magnetic resonance, computer-aided tomography or ultrasound device. In other embodiments, if the data record has been stored in the memory 2 in a different manner, the connection to a diagnosis device may be omitted. The object segmented with the aid of the image processing and control processor 1, or the intermediate results of the segmentation, that is to say the segmentation status, is/are displayed on a monitor 4. The user can access the image processing and control processor 1 by means of a keyboard 5 or by means of other input devices that are not shown in Fig. 1, and thus can influence the progress of the segmentation method. Fig. 2 shows the progress of a segmentation method which can be carried out using the image processing device shown in Fig. 1.
Following the initialization in step 100, in step 102 a three-dimensional, medical data record is loaded which contains the object that is to be segmented. The data record has been reconstructed from measured values which have been generated by means of a computer-aided tomography scanner. The reconstructed data values are gray values, e.g. Hounsfield values. In other embodiments, a non-medical data record, or else a different type of data values, i.e. not gray values, could also be used to carry out the method according to the invention. Furthermore, the data values could also have been generated by other three- dimensional imaging methods and devices, such as magnetic resonance or ultrasound methods and devices.
In step 104, input parameters are selected, the number and type of which is dependent on the chosen segmentation method. According to the invention, any voxel-based segmentation method which supplies a set of object voxels after each segmentation step may be used. In this example of embodiment, a segmentation method having an expansion mode is described, which as input parameters comprises a seed voxel, an assignment criterion and the number of segmentation steps between two successive visualizations. The assignment criterion is in this case a range of values, where it is assumed that a voxel having a data value which lies within this range of values forms part of the object that is to be segmented. In addition, a viewing direction may be defined, that is to say the direction from which a viewer sees the segmented object after visualization.
In other embodiments, more than one seed voxel could also be specified, and this is necessary, for example, when two spatially separate objects, e.g. the two shoulder blades, are to be segmented in the three-dimensional data record. It is also possible to define different types of seed voxels in order to be able to distinguish different objects of a similar consistency, for example to distinguish the shoulder blades from the spinal column. In such a case, a different assignment criterion may apply for each type of seed voxel.
In step 106, a segmentation step is carried out starting from the seed voxel or from the already determined object voxels. In the first pass, only the seed voxel forms part of the object that is to be segmented. A check is made as to whether the data values or gray values of the neighboring voxels of the voxels that already belong to the segmented object lie within the range of values, where only voxels which have not already been considered in previous passes are examined. In this example of embodiment, the neighboring voxels are voxels which adjoin a given voxel. If a data value of a neighboring voxel lies in this range of values, then this voxel is assigned to the object that is to be segmented and designated an object voxel. Following a segmentation step, the segmented object therefore consists of the voxels which belonged to the object prior to this step and of the neighboring voxels which lie within the range of values. In this way, the object expands. This is therefore referred to as the expansion mode.
According to the invention, it is important that step 106 supplies a set of object voxels (voxel-based segmentation method). The way in which this set of voxels is generated may vary from embodiment to embodiment.
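The expansion mode described above can be sketched as a short region-growing routine. This is a minimal illustration, not the claimed implementation; the helper names (`expansion_step`, `frontier`) and the choice of a 6-neighborhood are assumptions for the example, consistent with "voxels which adjoin a given voxel" and the range-of-values assignment criterion:

```python
import numpy as np

def expansion_step(volume, object_voxels, frontier, value_range):
    """One region-growing step: examine the 6-neighbors of the current
    frontier and assign those whose data value lies within value_range.
    Returns the set of newly assigned object voxels (the new frontier)."""
    lo, hi = value_range
    new_voxels = set()
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    for (x, y, z) in frontier:
        for dx, dy, dz in offsets:
            n = (x + dx, y + dy, z + dz)
            if n in object_voxels or n in new_voxels:
                continue  # already considered in a previous pass
            if all(0 <= c < s for c, s in zip(n, volume.shape)):
                if lo <= volume[n] <= hi:
                    new_voxels.add(n)
    object_voxels |= new_voxels
    return new_voxels

# demo: a bright 3x3x3 cube inside a dark volume, seeded at its center
vol = np.zeros((7, 7, 7))
vol[2:5, 2:5, 2:5] = 100.0
seed = (3, 3, 3)
obj, frontier = {seed}, {seed}
while frontier:  # repeat until no new object voxels are found
    frontier = expansion_step(vol, obj, frontier, (50.0, 150.0))
```

After the loop, `obj` contains exactly the 27 voxels of the bright cube: the termination criterion (no new object voxels) is met once the grown region reaches the dark surroundings.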
In step 107, a check is made as to whether the changes to the set of object voxels in step 106 have changed a two-dimensional intermediate image. If this is the case, this change is taken into account and the intermediate image is updated. After the first segmentation step, there is as yet no intermediate image and an intermediate image is therefore generated. The intermediate image is described with the aid of the arrangement shown in Fig. 3. In said figure, a few object voxels 10 are shown, these being partially visible from the viewing direction 16. It is assumed in this case that an object voxel is visible if it can be struck by a ray 12 that is oriented parallel to the viewing direction 16. In other embodiments, the rays 12 could also run in a divergent manner from a point. The object voxels are assumed to be non-transparent, so that in each case only the voxel 10 which comes first with respect to the viewing direction 16 is seen. The visible object voxels 10 are shown by hatching. A projection plane 14 is arranged perpendicular to the viewing direction 16.
In order to generate the intermediate image, for each visible object voxel 10, that is to say initially for the seed voxel, the distance of said voxel in the viewing direction from the projection plane 14 is determined. Furthermore, for each visible object voxel 10, the projection location is determined, that is to say the location in the projection plane 14 at which the object voxel 10 is projected in the viewing direction. The distance of each object voxel 10 that has become visible is then assigned to the projection location of this voxel, where this distance assignment displays the two-dimensional intermediate image. The intermediate image can be stored in the memory 2 of the image processing device in a known manner. This type of intermediate image is also referred to as a "z buffer".
A newly determined object voxel changes the intermediate image if it can be seen in the viewing direction and possibly hides an object voxel which was previously visible. If this is the case, the intermediate image is updated by determining, for the newly determined object voxel, the distance from the projection plane and assigning it to the projection location of this object voxel. The distance and the projection location of the object voxel which is now possibly hidden by the new object voxel are erased from the intermediate image. In the memory 2 of the image processing device, note is taken of which points of the intermediate image have changed since the last visualization.
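The z-buffer bookkeeping of steps 107 and 110 can be illustrated as follows. This sketch assumes, for concreteness, that the viewing direction is the +z axis and that the projection plane lies at z = 0, so the distance of a voxel is simply its z coordinate and a smaller distance hides a larger one; the function name and data layout are illustrative only:

```python
def update_zbuffer(zbuffer, changed_points, voxel):
    """Insert a newly determined object voxel (x, y, z) into the z-buffer.
    At each projection location (x, y) only the smallest distance z (the
    nearest, visible voxel) is kept; the previously stored distance of a
    voxel that is now hidden is overwritten. changed_points records which
    projection locations differ since the last visualization."""
    x, y, z = voxel
    old = zbuffer.get((x, y))
    if old is None or z < old:  # new voxel is visible, may hide another
        zbuffer[(x, y)] = z
        changed_points.add((x, y))

zbuf, changed = {}, set()
update_zbuffer(zbuf, changed, (3, 4, 10))
update_zbuffer(zbuf, changed, (3, 4, 7))   # nearer: hides the first voxel
update_zbuffer(zbuf, changed, (3, 4, 12))  # farther: z-buffer unchanged
```

Here `zbuf[(3, 4)]` ends up as 7, the distance of the nearest object voxel on that ray, and `changed` notes the projection location for the next visualization pass.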
In other embodiments, the segmentation step 106 could also be carried out a number of times before continuing with step 107.
In step 108, a check is made as to whether the number of segmentation steps between two successive visualizations, said number having been indicated in step 104, has already passed. If this is the case, or if no new object voxels have been determined in step 106, then the method proceeds with step 110. Otherwise, steps 106 and 107 are repeated. In step 110, the object voxels which are visible in the viewing direction are visualized. That is to say only the surface of the object is displayed. In this case, the visualization is effected with the aid of the two-dimensional intermediate image, without using the three-dimensional set of voxels that has already been assigned to the object. Therefore, during the visualization it is not necessary to carry out computing operations using the voxels of the three-dimensional object, and this considerably reduces the computational complexity.
The brightness of the voxel that is to be displayed depends on the angle between the surface normal of the object at the location of the voxel and the viewing direction. If the surface normal is oriented parallel to the viewing direction, then the voxel is displayed in light color. If the normal is oriented perpendicular to the viewing direction, then the voxel is displayed in dark color. This type of display is a simple version of "Phong shading". "Phong shading" is described, for example, in "Computer Graphics: Principles and Practice", J. D. Foley et al., Addison-Wesley, 1997, pp. 738 ff.
In order to determine the surface normal of the surface at the location of an object voxel that is to be displayed, the tangential plane of the surface at this point is determined. This tangential plane is approximated by a plane which is spanned by the vectors

$v_{x,kl} = \left(x_{k+1} - x_{k-1},\; 0,\; z_{k+1,l} - z_{k-1,l}\right), \qquad v_{y,kl} = \left(0,\; y_{l+1} - y_{l-1},\; z_{k,l+1} - z_{k,l-1}\right)$ (1)

Here, $(x_k, y_l)$ designates the projection locations at which the visible object voxels are projected in the viewing direction, where the x positions $x_k$ and the y positions $y_l$ refer to the coordinate system 18 in Fig. 3. The indices k and l count up the individual projection locations. The distance of a visible object voxel 10 from the projection plane 14 in the viewing direction 16 is designated $z_{kl}$.

The surface normal is then given by the normalized cross-product

$n_{kl} = \frac{v_{x,kl} \times v_{y,kl}}{\left|v_{x,kl} \times v_{y,kl}\right|}$ (2)

In this way, a surface normal $n_{kl}$ is determined for each visible object voxel and thus for each projection location $(x_k, y_l)$.
In the following, the brightness values of the visible object voxels at the projection locations $(x_k, y_l)$ are determined. In this context, it is assumed that a fictitious light source illuminates the object in the viewing direction with rays which are parallel to one another. The intensity $I_{kl}$ in the viewing direction of the radiation reflected by the surface is then, to a rough approximation, proportional to the value of the scalar product of the surface normal and the normalized viewing direction, where in this example of embodiment it is assumed that the viewing direction is the z direction:

$I_{kl} \propto \left|n_{kl} \cdot e_z\right| = \left|n_{z,kl}\right|$ (3)

A brightness value which is proportional to $I_{kl}$ is now assigned to each point $(x_k, y_l)$. These brightness values are displayed on the monitor 4 at the points $(x_k, y_l)$ and show the visualization of the object segmented up to this iteration step.
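The computation of equations (1) to (3) on a dense z-buffer can be sketched as below. This is an illustrative approximation, assuming unit grid spacing, central differences for the tangent vectors, and the z axis as viewing direction, as in the embodiment; the function name is hypothetical:

```python
import numpy as np

def shade_zbuffer(z):
    """Shade a dense z-buffer z[k, l]: tangent vectors from central
    differences in x and y (eq. 1), surface normal as their normalized
    cross product (eq. 2), and intensity proportional to the scalar
    product of the normal with the viewing direction e_z (eq. 3)."""
    dz_dx = np.gradient(z, axis=0)  # central differences along x
    dz_dy = np.gradient(z, axis=1)  # central differences along y
    # with unit spacing, v_x = (1, 0, dz/dx) and v_y = (0, 1, dz/dy),
    # so v_x x v_y = (-dz/dx, -dz/dy, 1)
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(z)])
    n /= np.linalg.norm(n, axis=0)  # normalize every normal
    return np.abs(n[2])             # I_kl = |n_kl . e_z|

# a flat surface perpendicular to the viewing direction is maximally bright
flat = shade_zbuffer(np.full((5, 5), 3.0))
```

For the flat buffer every normal is $(0, 0, 1)$ and the intensity is 1 everywhere; a surface tilted at 45 degrees (e.g. $z_{kl} = x_k$) yields the dimmer value $1/\sqrt{2}$, matching the described dependence on the angle between normal and viewing direction.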
In another embodiment, the visualization steps can be changed depending on the desired visualization quality. For example, the surface could be smoothed using known smoothing techniques prior to determination of the distances $z_{kl}$ or the surface normals. A radiation source with divergent rays could also be used, and this would result in a corresponding adaptation of equation (3).
The calculated surface normals $n_{kl}$ and intensities $I_{kl}$ are stored in the memory 2, so that these values can be accessed during subsequent visualization steps. The determination of the surface normal $n_{kl}$ and of the intensity $I_{kl}$ is then carried out only for those projection locations $(x_k, y_l)$ and distances assigned thereto that have changed since the last visualization. This limitation considerably reduces the computational complexity and allows the progress of the segmentation to be displayed in real time.
If, in other embodiments, a number of seed voxels have been placed in different objects that are to be segmented, it is useful to display the different segmented objects in different colors. The brightness value is then visualized not in gray values but rather in different brightness stages of the color of the respective object.
Following the visualization, in step 112 a check is made as to whether no new object voxels could be determined in step 106. If this is the case, the method ends at step 118. Otherwise, the method proceeds with step 114. In step 114, a user has the possibility to intervene in the segmentation method.
If the user stops the segmentation, for example by pressing a key on the keyboard 5, then the method proceeds to step 116. Otherwise, the segmentation continues with the next segmentation step 106.
In step 116, the user can choose between changing the input parameters or terminating the segmentation method. If the user chooses the latter option, the method ends at step 118. Otherwise the method proceeds with step 120. In step 120, the user has the possibility to change input parameters. The user can change, for example, the assignment criterion, that is to say in this case the range of values, or the number of segmentation steps between two visualizations. If the user changes the viewing direction, then the intermediate image is computed anew as described above for all object voxels that are now visible. After the input parameters have been changed, the method proceeds with segmentation step 106.
Following termination of the method in step 118, the voxels displayed in step 110 show the segmentation of the object.
In another embodiment, as an alternative to expanding the object by determining new object voxels (expansion mode), in step 106 contraction of the object may take place (contraction mode), in which voxels which have already been assigned to the object are removed again. This may be useful, for example, if parts of the object which hide the region that is important for a specific use or which do not form part of the object have already been segmented. The choice between the expansion mode and the contraction mode may be made by the user in step 120. In step 106, a check is then firstly made as to which mode the user has selected. In the expansion mode, step 106 is carried out as described above, with note additionally being taken in the memory 2 during each segmentation step of which voxels have been assigned to the object. The order of assignment is thus stored. If the contraction mode has been selected, then in a reverse segmentation step those voxels which according to the order of assignment were assigned to the object last are removed from the object, so that the segmentation in the contraction mode is carried out backward in reverse order.
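The expansion/contraction bookkeeping described above amounts to storing the order of assignment and undoing it in reverse. A minimal sketch, with an assumed class name and without the neighborhood logic of step 106:

```python
class SegmentationHistory:
    """Keeps the order in which voxels were assigned to the object during
    expansion, so that a contraction step can remove them again in the
    reverse order, voxel by voxel."""
    def __init__(self):
        self.order = []           # assignment order, noted in memory
        self.object_voxels = set()

    def expand(self, voxel):
        if voxel not in self.object_voxels:
            self.object_voxels.add(voxel)
            self.order.append(voxel)

    def contract(self):
        if self.order:
            voxel = self.order.pop()  # last assigned voxel is removed first
            self.object_voxels.remove(voxel)
            return voxel
        return None                   # no object voxels left: terminate

h = SegmentationHistory()
for v in [(0, 0, 0), (1, 0, 0), (2, 0, 0)]:
    h.expand(v)
removed = h.contract()
```

Here `removed` is `(2, 0, 0)`, the voxel assigned last, and repeated calls return the earlier stages of the segmentation until no object voxels remain, which corresponds to the contraction-mode termination check of step 112.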
If in the contraction mode a visible object voxel is removed, in step 107 the corresponding projection location and the corresponding distance are erased from the intermediate image. In addition, it is ascertained whether an object voxel which has been hidden up to this point can now be seen. If this is the case, then for this voxel the distance in the viewing direction from the projection plane is determined and assigned to the projection location of this voxel. This assignment is added to the intermediate image. Note is taken in the memory 2 of which regions of the intermediate image, that is to say which projection locations and distances, have changed since the last visualization.
Moreover, in the contraction mode, a check is not made in step 112 as to whether new object voxels have been determined in step 106. Instead, a check is made as to whether there are no longer any object voxels. If this is the case, the method ends at step 118. As already mentioned above, according to the invention a two-dimensional intermediate image is generated by means of which it is possible to visualize the segmentation status. In this case, the updating of the intermediate image is preferably carried out on the basis of the object voxels which have been added or removed since the last updating of the intermediate image. In other embodiments, this intermediate image may also be generated by the known maximum-intensity projection or minimum-intensity projection methods.
Therefore, for example, in the case of maximum-intensity projection (MIP) for determining an intermediate image it is possible to define a projection plane 24 which is oriented perpendicular to the viewing direction 26. In addition, straight lines 22 which are parallel to one another and to the viewing direction 26 can be defined, which straight lines 22 pass through the object 20 (see Fig. 4). The straight lines 22 are distributed uniformly over the projection plane 24. The number of straight lines 22 is for example 512². For each straight line 22, account is then taken of the set of data values, that is to say gray values or Hounsfield values, whose object voxels lie on the respective straight line. Of this set, the largest data value is determined and assigned to the projection location of the object voxel corresponding to the data value. This assignment shows the two-dimensional intermediate image. The assigned data values can be displayed on a monitor unchanged at their projection locations. The intermediate image is therefore in this case directly the visualization of the object voxels. In the case of minimum-intensity projection (mIP) the method is the same as in the case of MIP, with the smallest data value of the set of data values whose object voxels lie on a straight line being determined.
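For a regular voxel grid with the viewing direction along the z axis, the MIP variant of the intermediate image reduces to a masked maximum along one array axis. A sketch under those assumptions (function name illustrative; mIP would use the minimum instead):

```python
import numpy as np

def mip_intermediate_image(volume, object_mask):
    """Maximum-intensity projection along the viewing direction (here
    the z axis): for each projection location, keep the largest data
    value among the object voxels on the corresponding ray."""
    masked = np.where(object_mask, volume, -np.inf)
    image = masked.max(axis=2)        # one ray per (x, y) location
    image[np.isneginf(image)] = 0.0   # rays that hit no object voxel
    return image

vol = np.zeros((4, 4, 4))
mask = np.zeros((4, 4, 4), dtype=bool)
vol[1, 1, 0], vol[1, 1, 3] = 40.0, 90.0  # two object voxels on one ray
mask[1, 1, 0] = mask[1, 1, 3] = True
img = mip_intermediate_image(vol, mask)
```

The ray through projection location (1, 1) carries the values 40 and 90, so `img[1, 1]` holds 90, and this intermediate image can be displayed directly, unchanged, as described above.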
LIST OF REFERENCES:
1 image processing and control processor
2 memory
3 bus system
4 monitor
5 keyboard
10,20 object voxels
12,22 rays
14,24 projection plane
16,26 viewing direction
18 coordinate system

Claims

CLAIMS:
1. A method for the interactive, voxel-based segmentation of a three-dimensional object in a three-dimensional, in particular medical, data record with on-going visualization of the respective segmentation status, where the segmentation comprises a number of segmentation steps, having the steps:
a) carrying out of at least one segmentation step, where each segmentation step supplies a set of object voxels,
b) updating of, or after the first segmentation step generation of, a two-dimensional intermediate image with the aid of the object voxels,
c) repetition of steps a) and b) at least once, or proceeding with step d),
d) visualization of the segmentation status with the aid of the intermediate image,
e) repetition of steps a) to d) until a termination criterion is met.
2. A method as claimed in claim 1, characterized in that in step b) the intermediate image is updated with the aid of that part of the set of object voxels which has changed since the last updating of the intermediate image.
3. A method as claimed in claim 1, characterized in that in step d), in order to visualize the segmentation status, use is made only of those regions of the intermediate image which have changed since the last visualization.
4. A method as claimed in claim 1, characterized in that the intermediate image is generated in step b) as an assignment of the distance with the aid of a projection plane (14) which is oriented perpendicular to a predefined viewing direction (16), having the following steps:
i) determination of those object voxels (10) which have become visible from the viewing direction (16) in the at least one segmentation step,
ii) determination of in each case one projection location by projecting each object voxel that has become visible in the viewing direction onto the projection plane,
iii) determination of the distance in the viewing direction (16) of each of these visible object voxels (10) from the projection plane,
iv) assignment of the distance of each object voxel that has become visible to the projection location of this object voxel.
5. A method as claimed in claim 4, characterized in that in step d) the visualization comprises the following steps:
i) determination of the surface normal of the surface of the segmented object at the points at which those object voxels (10) are located which since the last visualization have brought about a change in the intermediate image, with the aid of the distances and projection locations contained in the intermediate image,
ii) assignment of in each case one brightness value to each projection location, such that the respective brightness value becomes greater with the value of the projection, onto the viewing direction (16), of the surface normal at the point at which the object voxel corresponding to the projection location is located,
iii) displaying of the brightness values at the projection locations on an image display unit, in particular on a monitor.
6. A method as claimed in claim 1, characterized by an expansion mode in which the at least one segmentation step in step a) comprises the following steps:
i) determination of the assignment of the neighboring voxels of object voxels, or of the neighboring voxels of a predefined voxel, to the object with the aid of an assignment criterion,
ii) assignment to the object of the neighboring voxels which in accordance with the assignment criterion belong to the object.
7. A method as claimed in claim 6, characterized in that, in the expansion mode, the order in which the voxels are assigned to the object is stored in the segmentation steps, and in that the method comprises a contraction mode, in which contraction mode, based on the respective segmentation status, in one or more segmentation steps the voxels are removed from the segmented object in the reverse order with respect to the order in which they were assigned to the object in the expansion mode.
8. A method as claimed in claim 6, characterized in that the assignment criterion comprises a range of values and in that all voxels whose data values lie within the range of values belong to the object.
9. A method as claimed in claim 1, characterized in that the termination criterion is defined by the fact that no new object voxels could be determined or no more object voxels could be removed in the last segmentation step.
10. An image processing device for carrying out the method as claimed in claim 1, having
- a memory unit, in particular for storing at least one three-dimensional, in particular medical, data record, one segmented object and one intermediate image,
- an image display unit for displaying the at least one three-dimensional data record and for displaying segmented objects,
- a calculation unit, in particular for calculating segmentation steps, intermediate images and visualizations,
- a control unit for controlling the memory unit, the image display unit and the calculation unit for the voxel-based, interactive segmentation of a three-dimensional object in a three-dimensional, in particular medical, data record with continuous visualization of the respective segmentation status, where the segmentation comprises a number of segmentation steps, having the steps:
a) carrying out of at least one segmentation step, where each segmentation step supplies a set of object voxels,
b) updating of, or after the first segmentation step generation of, a two-dimensional intermediate image with the aid of the object voxels,
c) repetition of steps a) and b) at least once, or proceeding with step d),
d) visualization of the segmentation status with the aid of the intermediate image,
e) repetition of steps a) to d) until a termination criterion is met.
11. A computer program for a control unit as claimed in claim 10 for the voxel-based, interactive segmentation of a three-dimensional object in a three-dimensional, in particular medical, data record with on-going visualization of the respective segmentation status, where the segmentation comprises a number of segmentation steps, having the steps:
a) carrying out of at least one segmentation step, where each segmentation step supplies a set of object voxels,
b) updating of, or after the first segmentation step generation of, a two-dimensional intermediate image with the aid of the object voxels,
c) repetition of steps a) and b) at least once, or proceeding with step d),
d) visualization of the segmentation status with the aid of the intermediate image,
e) repetition of steps a) to d) until a termination criterion is met.
PCT/IB2004/000189 2003-01-30 2004-01-23 Method for the interactive segmentation of an object in a three-dimensional data record WO2004068401A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP03100191 2003-01-30
EP03100191.0 2003-01-30

Publications (2)

Publication Number Publication Date
WO2004068401A2 true WO2004068401A2 (en) 2004-08-12
WO2004068401A3 WO2004068401A3 (en) 2005-07-07

Family

ID=32798996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2004/000189 WO2004068401A2 (en) 2003-01-30 2004-01-23 Method for the interactive segmentation of an object in a three-dimensional data record

Country Status (1)

Country Link
WO (1) WO2004068401A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007136968A2 (en) 2006-05-19 2007-11-29 Koninklijke Philips Electronics, N.V. Error adaptive functional imaging
US8144987B2 (en) 2005-04-13 2012-03-27 Koninklijke Philips Electronics N.V. Method, a system and a computer program for segmenting a surface in a multi-dimensional dataset

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0516047A2 (en) * 1991-05-27 1992-12-02 Hitachi, Ltd. Method of and apparatus for processing multi-dimensional data

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CARR J C; GEE A H; PRAGER R W; DALTON K J: "Quantitative visualisation of surfaces from volumetric data" 6TH INTERNATIONAL CONFERENCE ON COMPUTER GRAPHICS AND VISUALIZATION 1998, 1998, pages 57-64, XP008046666 PLZEN, CZECH REPUBLIC *
R.C. GONZALES, R. E. WOODS: "Digital Image Processing" 2002, PRENTICE HALL , NEW JERSEY, USA , XP002327739 page 612 - page 613 *
REVOL-MULLER C ET AL: "Automated 3D region growing algorithm based on an assessment function" PATTERN RECOGNITION LETTERS, NORTH-HOLLAND PUBL. AMSTERDAM, NL, vol. 23, no. 1-3, January 2002 (2002-01), pages 137-150, XP004324064 ISSN: 0167-8655 *
SHIN B-S: "EFFICIENT NORMAL ESTIMATION USING VARIABLE-SIZE OPERATOR" JOURNAL OF VISUALIZATION AND COMPUTER ANIMATION, vol. 10, no. 2, April 1999 (1999-04), pages 91-107, XP001059488 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8144987B2 (en) 2005-04-13 2012-03-27 Koninklijke Philips Electronics N.V. Method, a system and a computer program for segmenting a surface in a multi-dimensional dataset
WO2007136968A2 (en) 2006-05-19 2007-11-29 Koninklijke Philips Electronics, N.V. Error adaptive functional imaging
WO2007136968A3 (en) * 2006-05-19 2008-05-15 Koninkl Philips Electronics Nv Error adaptive functional imaging
RU2449371C2 (en) * 2006-05-19 2012-04-27 Конинклейке Филипс Электроникс Н.В. Error adaptive functional imaging
US8170308B2 (en) 2006-05-19 2012-05-01 Koninklijke Philips Electronics N.V. Error adaptive functional imaging

Also Published As

Publication number Publication date
WO2004068401A3 (en) 2005-07-07


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase