GB2431308A - 3D manual segmentation by placement of constraint points - Google Patents

3D manual segmentation by placement of constraint points

Info

Publication number
GB2431308A
GB2431308A (application GB0520801A)
Authority
GB
United Kingdom
Prior art keywords
user
volume
points
constraint
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0520801A
Other versions
GB0520801D0 (en)
Inventor
Andreas Lars Gunnar Eriksson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Elekta AB
Original Assignee
Elekta AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Elekta AB filed Critical Elekta AB
Priority to GB0520801A priority Critical patent/GB2431308A/en
Publication of GB0520801D0 publication Critical patent/GB0520801D0/en
Publication of GB2431308A publication Critical patent/GB2431308A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/24Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20101Interactive definition of point of interest, landmark or seed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/41Medical
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/028Multiple view windows (top-side-front-sagittal-orthogonal)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method of defining a volume is described, comprising the steps of displaying for a user at least one cross-section through a region containing the volume, accepting from the user details of a plurality of edge points 40, 44, 48, each defining a limit to the volume, selecting a three dimensional surface that includes the edge points and displaying the intersection of that surface with the displayed cross-section, accepting from the user details of a further edge point 52 (fig 10), and selecting a further three dimensional surface that includes the edge points and the further edge point, and displaying the intersection of the further surface with the displayed cross-section. By revising the displayed surface interactively as the user adds points, the user can easily see the surface that is being created and the areas where the surface does not correlate with the underlying image. Suitable functions for the surface include radial basis functions of the type f(x) = p(x) + Σ_{i=1}^{N} λ_i Φ(‖x − x_i‖), where Φ(r), r > 0 is the basis function, p is a polynomial of low degree, N is the number of constraint points, λ_i are weights, and ‖·‖ is the Euclidean norm. The basis function Φ(r) can be a function such as r, r² log r, r³ or r⁴ log r. To solve the function, a surface constraint point x_i is treated as specifying that f(x_i) = 0 and an inside constraint point x_i as specifying that f(x_i) = 1. The application also relates to a method of defining a volume by way of such a formula. It is preferred that the computed surface is exported to a treatment planning system so that it can be used for effective treatment of the tumour represented by the surface.

Description

3D manual segmentation by placement of constraint points
FIELD OF THE INVENTION
The present invention relates to the interpretation of images.
BACKGROUND ART
There are a variety of reasons why images may need to be interpreted.
One very common reason, to which the examples in this application relate, is the processing of medical images to extract clinically useful information from them.
The various forms of medical diagnostic apparatus today produce a large volume of images which need to be interpreted by a suitably qualified clinician.
Sometimes, the interpretation is qualitative, for example the interpretation of an x-ray image to identify an ingested foreign body. However, in other circumstances the interpretation needs to be quantitative, such as the identification of the extent of a tumour prior to radiotherapy treatment.
In such a quantitative situation, the accuracy of the interpretation can be exceedingly important. Radiotherapy depends on the use of a harmful beam of radiation to destroy cancerous cells in its path, and thus the size and direction of the beam is chosen (and varied) so as to minimise the irradiation of healthy tissue and maximise the irradiation of cancerous tissue. If cancerous tissue is not irradiated then the tumour is more easily able to recur, whereas the irradiation of healthy tissue adds to the side effects of treatment, thereby prolonging the recovery period and limiting the radiotherapeutic dose that can be given.
Thus, it is necessary to identify the extent of the tumour in the medical image so that the information can be passed to a treatment planning system.
Such systems use the known physics of the radiotherapy treatment to design a treatment plan that will deliver a radiation dose to a defined volume and minimise the dose to surrounding tissue, taking into account any particularly sensitive areas. They therefore require an accurate statement as to the extent of the volume to be irradiated.
This statement is obtained by a process known as "segmentation". It requires a medical professional to view an image and mark the extent of the tumour. Users can often spend a considerable time segmenting different anatomical structures, either to define the target volume that should be irradiated in a radiation treatment planning application or simply to measure the volume of a tumour. If that time could be reduced then the medical professional could be employed more efficiently.
Automatic segmentation methods exist, but often have problems producing the correct result. Manual segmentation techniques are therefore an important tool. Given how time consuming and tedious manual segmentation can be, and the increased amount of medical images produced today, a quick way of achieving manual 3D segmentation would be of great value.
There are two basic methods of manual segmentation used in present treatment planning applications, paintbrush methods and contouring methods.
With both methods the user is presented with one or several 2D views of tomographic 3D data, such as a CT or an MRI image series, and controls the segmentation by drawing in one of the 2D views.
With the paintbrush method, the user "paints" the area to include in the segmentation using a tool similar to paintbrushing tools in image handling applications such as Photoshop. The output of the segmentation is a discrete volume of voxels. The value of a voxel is true if the voxel is included in the segmentation (the voxel has been painted by the user), otherwise the value of the voxel is false. By painting several 2D views it is possible to achieve a 3D result. This process can be very time consuming, however. The segmentation of the liver to a precision of 1 mm (i.e. the length of each side of the voxels of the discrete volume is 1 mm) would require painting more than 100 2D cross-sections of the liver. Needless to say, this would take a considerable time for the user.
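By way of illustration, the boolean voxel representation described above can be sketched as follows (a minimal sketch; the volume size, axis order and brush shape are illustrative assumptions, not taken from the application):

```python
import numpy as np

# The segmentation is a discrete boolean volume: a voxel is True once
# the user has painted it. Sizes here are placeholders.
volume = np.zeros((128, 128, 128), dtype=bool)  # e.g. 1 mm voxels

def paint_disc(volume, slice_index, centre, radius):
    """Paint a filled disc into one axial slice, as a single brush stamp."""
    yy, xx = np.ogrid[:volume.shape[1], :volume.shape[2]]
    mask = (yy - centre[0]) ** 2 + (xx - centre[1]) ** 2 <= radius ** 2
    volume[slice_index][mask] = True

# One brush stamp on slice 60; a real stroke repeats this along the mouse path.
paint_disc(volume, 60, centre=(64, 64), radius=10)
print(volume.sum(), "voxels painted")
```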
The contour approach works in a similar way, but instead of painting the whole area to segment, the user outlines it by drawing a contour around it. By stacking multiple closed 2D contours on top of each other a 3D volume is produced. Usually the 2D contours are stored as closed 2D polygons. Each 2D contour could be given a thickness so that each 2D contour corresponds to a 3D (generalized) cylinder. This still requires the user to draw more than 100 2D contours to segment an organ such as the liver to a precision of 1 mm.
To reduce the number of contours that the user has to draw, some systems use a technique called contour interpolation. This tries to predict what a contour situated between two user-drawn contours looks like. Most contour interpolation techniques work by trying to connect points on a contour A with corresponding points on a contour B. The corresponding points are connected by straight lines. The intersection of those lines with the plane in which the interpolated contour should be created gives the vertices of the interpolated contour. Determining the best correspondence of points on the two contours is a difficult problem, especially when the two contours differ greatly in shape. Results are often produced that differ significantly from the user's expectations, or results that are impossible in that the surface is self-intersecting. The problem is even more difficult when the topology changes, such as when one contour in a slice branches into two contours in a neighbouring slice.
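To make the straight-line scheme concrete, the following is a minimal sketch of such an interpolation (the arc-length resampling and the nearest-point correspondence are illustrative simplifications; choosing a good correspondence is precisely the difficult step noted above):

```python
import numpy as np

def resample_closed(contour, n):
    """Resample a closed 2D polygon (k x 2 array) to n points by arc length."""
    pts = np.vstack([contour, contour[:1]])            # close the loop
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    s = np.linspace(0.0, t[-1], n, endpoint=False)
    x = np.interp(s, t, pts[:, 0])
    y = np.interp(s, t, pts[:, 1])
    return np.column_stack([x, y])

def interpolate_contour(contour_a, contour_b, alpha, n=64):
    """Vertices of a contour a fraction alpha of the way from A to B,
    connecting corresponding points by straight lines."""
    a = resample_closed(contour_a, n)
    b = resample_closed(contour_b, n)
    # Crude correspondence: roll B so its closest point matches point 0 of A.
    shift = np.argmin(np.linalg.norm(b - a[0], axis=1))
    b = np.roll(b, -shift, axis=0)
    return (1 - alpha) * a + alpha * b
```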
The method disclosed by Chandrajit L. Bajaj et al in "Arbitrary topology shape reconstruction from planar cross sections", Graphical models and image processing: GMIP, 58(6): 524-543, 1996 (URL http://citeseer.ist.psu.edu/bajaj96arbitrary.html) tries to address these problems, but requires a significant implementation effort. Also, since the two contours are connected using straight lines, the result is often far from the smooth shapes of most anatomical structures. This means that the users have to draw a large number of contours to get an accurate segmentation. Other details can be found in J. Carr, W. Fright, and R. Beatson, "Surface interpolation with radial basis functions for medical imaging", 1997 (URL http://citeseer.ifi.unizh.ch/carr97surface.html).
Another restriction of the contouring approach is that all 2D contours must be parallel, or at least must not intersect each other. This restricts the user to doing all contouring in a single orientation. This is a severe restriction, since one often wishes to contour different parts of a structure using different views. It is common for a user to see that the segmentation is inaccurate in a view that has a different orientation to that in which the contours have been drawn, and to wish to correct the segmentation in that view. The user is then forced to navigate, in the original view, to the position where the segmentation is wrong, and correct the error there. This may be difficult.
G. Turk and J. O'Brien, "Shape transformation using variational implicit functions", 1999 (URL http://citeseer.ifi.unizh.ch/turk99shape.html) uses implicit surfaces and radial basis functions to interpolate between the contours of a hip joint. However, it is not an interactive system and only addresses the problem of interpolating between existing contours. J. Carr et al in "Surface interpolation with radial basis functions for medical imaging", 1997 (URL http://citeseer.ifi.unizh.ch/carr97surface.html) uses the same techniques to smoothly interpolate the surface of the skull over defect regions with the purpose of creating cranial implants. However, this does not provide an interactive system.
G. Turk and J. O'Brien, "Modelling with Implicit Surfaces that Interpolate", ACM Transactions on Graphics, Vol 21, No 4, October 2002, describes the mathematical methods necessary for determining three dimensional surfaces based on inside, surface, and outside constraint points. The examples given of suitable applications are animation and improved rendering of polygonal surfaces.
SUMMARY OF THE INVENTION
The present invention therefore provides a method of defining a volume, comprising the steps of displaying for a user at least one cross-section through a region containing the volume; accepting from the user details of a plurality of edge points, each defining a limit to the volume; selecting a three dimensional surface that includes the edge points and displaying the intersection of that surface with the displayed cross-section; accepting from the user details of a further edge point; and selecting a further three dimensional surface that includes the edge points and the further edge point, and displaying the intersection of the further surface with the displayed cross-section.
By revising the displayed surface interactively as the user adds (or removes or moves) points, the user can easily see the surface that is being created and the areas where the surface does not correlate with the underlying image. This means that the user can see where a greater number of points are required and where fewer are needed, and respond accordingly. Time is not wasted telling the system what it can easily work out, and the defined surface quickly approximates the desired shape.
Ideally, a plurality of cross-sections are displayed for the user, such as three cross-sections intersecting at a point in the volume. This allows the display to conform to a Cartesian co-ordinate system aligned with the axial, sagittal and coronal axes with which physicians are familiar. The user should be allowed to zoom and/or pan the cross-section(s) so that the appropriate region can be viewed, and so that different sections can be viewed to see the shape of the surface in those sections and, if necessary, add further edge points.
The details of inner and/or outer points can also be accepted. These are points lying within the volume to be defined and points lying outside the volume to be defined, respectively. These can assist in defining the surface.
The surface can be computed as an implicit surface that is the solution in three dimensions of a function of the type f(x, y, z) = 0. Suitable functions include radial basis functions of the type f(x) = p(x) + Σ_{i=1}^{N} λ_i Φ(‖x − x_i‖), where Φ(r), r > 0 is the basis function, p is a polynomial of low degree, N is the number of constraint points, λ_i are weights, and ‖·‖ is the Euclidean norm.
The basis function Φ(r) can be a function such as r, r² log r, or r⁴ log r.
To solve the function, a surface constraint point x_i is treated as specifying that f(x_i) = 0 and an inside constraint point x_i as specifying that f(x_i) = 1. Outside constraint points can specify that f(x_i) = −1, or +1 followed by inversion of the surface if there are no inside constraint points.
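Written out in full, these constraints yield a linear system in the weights and polynomial coefficients (a standard formulation of radial basis function interpolation; the block-matrix notation below is an editorial addition, not taken from the application):

```latex
% With a linear polynomial p(x) = c_0 + c_1 x + c_2 y + c_3 z, the
% interpolation conditions f(x_j) = h_j (h_j = 0 at surface constraints,
% h_j = 1 at inside constraints), plus the usual orthogonality side
% conditions on the weights, give:
\begin{equation*}
\begin{pmatrix} A & P \\ P^{\mathsf{T}} & 0 \end{pmatrix}
\begin{pmatrix} \lambda \\ c \end{pmatrix}
=
\begin{pmatrix} h \\ 0 \end{pmatrix},
\qquad
A_{ji} = \Phi\!\left(\lVert x_j - x_i \rVert\right),
\qquad
P_j = \left(1,\; x_j,\; y_j,\; z_j\right).
\end{equation*}
```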
The present invention also provides, in another aspect, a method of defining a volume, comprising the steps of displaying for a user at least one cross-section through a region containing the volume; accepting from the user details of a plurality of edge points, each defining a limit to the volume; selecting a three dimensional surface that includes the edge points and displaying the intersection of that surface with the displayed cross-section; wherein the three dimensional surface is computed as an implicit surface that is the solution in three dimensions of a radial basis function of the type f(x) = p(x) + Σ_{i=1}^{N} λ_i Φ(‖x − x_i‖), where Φ(r), r > 0 is the basis function, p is a polynomial of low degree, N is the number of constraint points, and ‖·‖ is the Euclidean norm.
In this second aspect, the invention provides a means of deriving a realistic surface from only a small number of points, regardless of how they are input and in which order.
In either aspect, it is preferred that the computed surface is exported to a treatment planning system so that it can be used for effective treatment of the tumour represented by the surface.
BRIEF DESCRIPTION OF THE DRAWINGS
An embodiment of the present invention will now be described by way of example, with reference to the accompanying figures, in which: Figure 1 shows the sequence of events required in order to treat a patient; Figures 2 and 3 represent the development of a best fit two dimensional shape as constraint points are added; and Figures 4 to 19 show sequential steps in the definition of a three dimensional volume.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Figure 1 shows the general sequence of events involved in radiotherapy or like treatment. A patient must first be scanned using (for example) CT or MRI scanning, which produces a dataset containing an image of the volume of interest in the patient. That dataset must then be reviewed to identify the tumour, either by a physician or automatically. Once a volume within the dataset has been identified, this information is exported to a treatment planning system which uses the known properties of the radiation beam and the known constraints of the available beam delivery systems to compute a treatment plan that will maximise the dose to the tumour and minimise it elsewhere.
This application relates particularly to the step of identifying the relevant volume within the dataset. It assumes that the dataset is being reviewed by a physician and seeks to provide a means of communicating the information from the physician to the system in a manner that is both quick and accurate.
To avoid the problems with the current methods, we have developed a new way of manual 3D segmentation. Instead of painting the whole area to segment or outlining it, the user indicates one (or more) positions inside the structure to segment, called inside constraint points, and a small number of positions on the surface of the structure, called surface constraint points. Given the constraint points, a surface that surrounds the inside constraint points and that passes through all the surface constraint points is computed.
It should be noted that a process that indicated positions outside the structure instead of points inside the structure would also work. A process of this type would use outside constraint points in a complementary fashion.
The contour of the surface can be displayed in a number of 2D views, superimposed over the diagnostic images so that the user can see how closely the segmentation corresponds to the target structure. The surface should be computed in such a way that it assumes a smooth surface, corresponding to natural shapes. In the embodiment, there are no restrictions on how the constraint points may be placed so the user can (for example) place some constraint points using an axial view of the structure, some in a sagittal view and some in a coronal view. This, in combination with the natural shape of the computed surface, means that only a very small number of constraint points need to be placed by the user.
Accurate 3D segmentations are thus produced very quickly with a minimum of user interaction. A fairly accurate 3D segmentation can often be produced by placing surface constraints in only three orthogonal slices of the structure. The user can then continue and add surface constraint points where the surface differs markedly from the target structure, until it corresponds as closely as desired.
The constraint points are not connected to each other in any way; it is not analogous to creating a polygon in a drawing program where the user clicks at each vertex position and the vertices are connected in order. Instead the "vertices" are connected in such a way that the generated surface is as smooth as possible. This allows for very easy editing since the user does not need to specify how a new constraint point should be connected to the previous points.
Figures 2 and 3 show this in two dimensions. The connectedness of the constraints is not specified explicitly. Instead the constraints are connected in a way that produces the smoothest result. Thus, in figure 2, six constraint points 1-6 can be joined by an ellipse 9. Adding two extra constraint points 7, 8 as shown in figure 3 changes the natural shape from an ellipse 9 joining all points to a pair of circles 10a, 10b each joining a specific subset of points.
That the user only specifies surface constraint points, and not whole contours, is very useful in allowing the user to "draw" on intersecting planes. Two intersecting user-drawn contours are almost guaranteed to differ by a small amount at the points of intersection, leading to inconsistencies. Thus, even if a contour interpolation technique were available that could handle intersecting contours, it is hard to see how it could deal with inconsistent contours. Another great benefit of specifying only some points on the surface, compared to outlining a whole contour, is that the user only has to be accurate when placing a relatively small number of constraint points, not during the whole outlining. While outlining, where the user typically drags the mouse around the border of a 2D slice of the structure, even a small imprecise movement of the mouse means that the user has to go back and correct the mistake.
Computational method
The surface is computed as an implicit surface that is the solution to a function (in 3D) f(x, y, z) = 0. The function is a radial basis function of the form

f(x) = p(x) + Σ_{i=1}^{N} λ_i Φ(‖x − x_i‖)   (1)

where Φ(r), r > 0 is the basis function, p is a polynomial of low degree, N is the number of constraint points and ‖·‖ is the Euclidean norm. In the prototype, the basis function Φ(r) = r³ has been used. Other possible choices could be Φ(r) = r, r² log r, or Φ(r) = r⁴ log r. A surface constraint point x_i specifies that f(x_i) = 0 and an inside constraint point specifies that f(x_i) = 1. The locations of the constraint points are used as centres x_i of the basis functions. The set of constraints specifies a linear equation system that is solved for the weights λ_i and the polynomial p. This system is solved every time the user adds, removes or changes a constraint point.
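As a concrete illustration, the following is a minimal sketch of this computation in Python/NumPy (the function names are ours, the system is assumed non-singular, and none of the conditioning details of a production implementation are included):

```python
import numpy as np

def solve_rbf(points, values):
    """Fit f(x) = p(x) + sum_i lambda_i * phi(||x - x_i||) with phi(r) = r**3.

    points : (N, 3) array of constraint locations (the centres x_i)
    values : (N,) target values: 0 for surface, 1 for inside constraints
    Returns (lam, c), the RBF weights and linear polynomial coefficients.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = r ** 3                                    # phi(r) = r^3, as in the prototype
    P = np.hstack([np.ones((n, 1)), points])      # p(x) = c0 + c1*x + c2*y + c3*z
    M = np.zeros((n + 4, n + 4))
    M[:n, :n] = A
    M[:n, n:] = P
    M[n:, :n] = P.T
    rhs = np.concatenate([np.asarray(values, dtype=float), np.zeros(4)])
    sol = np.linalg.solve(M, rhs)                 # re-solved after every point edit
    return sol[:n], sol[n:]

def evaluate(x, points, lam, c):
    """Evaluate f at query points x ((M, 3) or (3,)); the surface is f = 0."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    pts = np.asarray(points, dtype=float)
    r = np.linalg.norm(x[:, None, :] - pts[None, :, :], axis=-1)
    return r ** 3 @ lam + c[0] + x @ c[1:]

# One inside constraint and four surface constraints:
pts = np.array([[0., 0., 0.], [1., 0., 0.], [-1., 0., 0.], [0., 1., 0.], [0., -1., 0.]])
vals = np.array([1., 0., 0., 0., 0.])
lam, c = solve_rbf(pts, vals)
```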
The use of radial basis functions to represent 3D shapes is described in more detail by Jonathan C. Carr et al in "Reconstruction and representation of 3D objects with radial basis functions", in SIGGRAPH 2001, Computer Graphics Proceedings, pages 67-76, ACM Press/ACM SIGGRAPH, 2001 (editor: Eugene Fiume), URL http://citeseer.ist.psu.edu/carr01reconstruction.html. To visualize the implicit surface in a 2D view as a contour, one simple approach is to use the marching squares algorithm as described by (for example) Diana Lingrand in "The marching cubes" (URL http://www.essi.fr/~lingrand/MarchingCubes/algo.html). However, this requires that equation (1) is evaluated once for each point in a 2D grid, resulting in long computation time. A faster way is to only evaluate the function close to the surface, starting from some seed points and following the contour around. The problem is how to find the seed point (or seed points if several contours are present in the 2D slice).
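For illustration, the brute-force grid variant can be sketched as follows, continuing the Python example above (scikit-image's find_contours is used here as a stand-in marching-squares implementation; the grid bounds and resolution are arbitrary):

```python
import numpy as np
from skimage import measure   # find_contours implements marching squares

# Sample f over the displayed slice (here the plane z = 0) on a regular
# grid, then trace the zero-level contour(s). This evaluates equation (1)
# at every grid point, which is exactly the cost noted above.
xs = np.linspace(-2.0, 2.0, 200)
ys = np.linspace(-2.0, 2.0, 200)
xx, yy = np.meshgrid(xs, ys)
queries = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])
grid = evaluate(queries, pts, lam, c).reshape(xx.shape)
contours = measure.find_contours(grid, level=0.0)   # list of (k, 2) arrays
```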
A better approach is to use interval analysis and decomposition techniques to focus evaluations in areas close to the contour without the risk of missing any contours. An algorithm for contouring of implicit curves is disclosed by John M. Snyder at "Interval analysis for computer graphics", SIGGRAPH 92: Proceedings of the 19th annual conference on Computer graphics and interactive techniques, pages 121-130 (ACM Press, New York, NY, USA, 1992. ISBN 0-89791-479-1).
Similar techniques may be used to render the surface in a 3D view by triangulating the implicit surface.
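A corresponding sketch for the 3D case, again continuing the example above and substituting an off-the-shelf marching-cubes routine for the interval techniques cited (an assumption of the sketch, not the application's method):

```python
import numpy as np
from skimage import measure

# Sample f on a coarse 3D grid and extract the zero level set as a mesh.
ax = np.linspace(-2.0, 2.0, 64)
xg, yg, zg = np.meshgrid(ax, ax, ax, indexing="ij")
q = np.column_stack([xg.ravel(), yg.ravel(), zg.ravel()])
field = evaluate(q, pts, lam, c).reshape(xg.shape)
verts, faces, normals, values = measure.marching_cubes(field, level=0.0)
```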
As mentioned above, placing an outside constraint point (i.e. one that is outside the desired surface) would also work. Alternatively, constraint points could be placed both inside and outside.
Example
To assist with an understanding of the manner in which the invention works, figures 4-19 show sequential steps in the demarcation of a tumour area.
The patient concerned suffers from a tumour in the head region which is to be treated by radiotherapy. Magnetic Resonance Imaging (MRI) has already been carried out and has produced a data set which has been loaded into the application. Figure 4 shows the basic application set-up, in which a primary window 12 is divided into four viewing areas. A first viewing area 14 situated in the top left shows the axial view, a second viewing area 16 on the top right shows the sagittal view, whilst a third viewing area 18 at the bottom left shows a coronal view. A fourth viewing area 20 to the bottom right will be described later.
To the left of the four viewing areas is a control area 22 on which are placed sliders 24 to allow rotation of the axial, sagittal and coronal views, together with contrast and brightness controls 26, and buttons to allow the opening, closing and saving of data sets and the like.
On each view, there is a centre point 30 provided for the user. This can be moved via a drag-and-drop process and represents the point of intersection of the three planes. Thus, moving the centre point 30 to the left in the sagittal view 16 shown in figure 4 will change the section that is selected for view in the coronal view 18. Likewise, moving the centre point 30 upwards in the sagittal view 16 will change the section that is shown in the axial view 14. The same applies mutatis mutandis if the centre point 30 is moved in the axial or the coronal views. Thus, the first step in analysing the image is to locate the particular area of interest, and this is done by placing the centre point 30 over the tumour in those views in which it is visible. In this case, it is visible at 32 in the axial view and at 34 in the coronal view. Once the centre point 30 has been moved in this manner, the tumour will come into view in the sagittal section.
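For illustration, the relationship between the three views and the underlying tomographic volume can be sketched as plain array slicing (the axis order and the placeholder volume are assumptions; real datasets carry orientation metadata):

```python
import numpy as np

# Placeholder MRI volume, indexed (z, y, x) by assumption.
dataset = np.random.rand(160, 256, 256)
cz, cy, cx = 80, 128, 128        # the draggable centre point 30

axial    = dataset[cz, :, :]     # moving the point re-slices these views
coronal  = dataset[:, cy, :]
sagittal = dataset[:, :, cx]
```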
Figure 5 shows the views presented to the user after the centre point 30 has been moved to the correct location and the views zoomed to show the tumour 36 in detail.
Figure 6 shows the first step in defining the volume to be treated. A single point is defined by clicking on one of the three views. Its projection on the other two views will then be displayed and can, if desired, be adjusted by dragging and dropping. This is referred to as the "inside constraint point" 38 and can be used to inform the system of an area that is within the volume to be defined.
Having defined an inside constraint point 38, the user then defines a number of surface constraint points 40 as shown in figure 7. In figure 7, two such surface constraint points 40 have been defined by clicking on the boundary of the tumour in the axial view at two extremities. The system then attempts to solve the formula (above) for the three constraints that are now defined - one inside constraint point 38 and two surface constraint points 40 - and has decided upon a possible solution 42. Clearly, at this early stage in the procedure the proposed solution 42 bears little resemblance to the actual shape of the tumour.
Figure 8 shows a close-up of the axial view, to show further steps in the definition of the surface. A further surface constraint point 44 is added and the system recalculates the proposed surface 46. In this case, given the distribution of the surface constraint points 40, 44, the surface is not closed, but part of it is beginning to approximate to the shape of the tumour. This is because careful choice of the basis function and weights directs the system to produce surfaces that have a smoothness and curvature which reflect the shapes chosen by biological processes. Although the present surface 46 is not closed and therefore cannot correspond to the tumour, it is clear to the user, through the interactive display of the current solution to the formula, exactly where the system lacks information as to the tumour shape. In this case, it is apparent that the system needs definition of the tumour in the area opposite the last placed surface constraint point 44. Figure 9 shows the effect of the user clicking at the surface of the tumour to place a still further surface constraint point 48. The system then re-solves the function and proposes a further revised surface 50. In the axial plane displayed, that surface 50 is now beginning generally to resemble the shape of the tumour.
It will however be seen in figure 9 that there are two large areas above and below the centre of the tumour (as viewed) where the proposed surface 50 extends beyond the bounds of the tumour. Thus, it is immediately apparent that further constraint points are needed in those areas. Figure 10 shows a further surface constraint point 52 being added in one of those areas. The re-solved surface 54 now more closely resembles the shape of the tumour in that area.
However, in other areas it is still extending beyond the tumour. Figure 11 therefore shows a further surface constraint point 56 being added and the consequences for the re-calculated surface 58. Again this still extends beyond the extent of the tumour in one area and therefore figure 12 shows a still further surface constraint point 60 being added, and its effect on the re-calculated surface 62.
In the axial plane displayed in this view, the proposed surface 62 now closely follows the extent of the tumour. However, no surface constraint points have been placed in the other views, and therefore the proposed shape in three dimensions is likely to be either a cylinder with the cross-section shown in figure 12, or a shape whose cross-section in any other plane is elliptical, with the extent of the ellipse not yet correlating with the shape of the tumour. As shown in figure 13, therefore, further constraint points 64 are placed by the user in the coronal and sagittal views to begin definition of the shape in three dimensions rather than two.
Figure 13 shows the initial stages of the process in which only a small number of constraint points have been added, and figure 14 shows the process after the addition of further constraint points 66 to produce a closed three dimensional surface 68. It can be seen in the coronal view of figure 14 that despite the presence of only three surface constraint points at the 12, 3 and 6 o'clock positions as illustrated, the system has nevertheless been able to close the surface 68 on the basis of the information provided by the constraint points placed in the axial and sagittal views.
With reference to the sagittal view of figure 14, it is immediately apparent that in one small area to the 2 o'clock position the proposed surface 68 extends beyond the extent of the tumour. As the presently proposed surface 68 is interactively re-drawn as constraint points are added, it is immediately apparent that further constraint points are necessary in that area to obtain an accurate registration. Likewise, it is apparent that in the 10 o'clock and 5 o'clock areas the system has accurately modelled the tumour size even in the absence of any constraint points. Thus, through the present invention information is passed swiftly back to the user as to where further constraint points are required. Time is not wasted allocating large numbers of constraint points distributed regularly over a surface, and the physician's time is therefore better used. As shown in figure 15, a further surface constraint point 70 is added in the 2 o'clock position on the sagittal view and the revised surface 72 is re-calculated and displayed on all three views.
That re-calculation of the revised surface has created a small discrepancy between the proposed surface 72 and the tumour surface at the 10 o'clock position, and therefore a further constraint point 74 is then added as shown in figure 16. The further revised surface is then shown.
The proposed surface 76 now closely follows the extent of the tumour in the three illustrated planes. It is notable that this has been achieved through only 17 or 18 clicks, as opposed to the very large number of operations required in painting or contouring methods. What is not known is the shape of the surface away from the three planes displayed, and therefore the user can select the central point 30 in one of the three views and drag it so as to illustrate a different plane. As illustrated in figure 17, the central point has been moved in the axial plane, and this has resulted in some constraint points disappearing as they no longer lie in a displayed plane. Some constraint points are illustrated in the coronal view in a dimmed manner, to indicate that they are not in the selected plane but are relatively close to it. This can help the user to locate constraint points if these require amendment, as small movements of the central point 30 might otherwise bring the surface constraint points into and out of view before the user could react.
Thus, after moving the central point 30, further constraint points can be added as required so that the final surface closely approximates to the shape of the tumour.
As shown in figure 18, the "polygonize" button can then be pressed to create a three dimensional view 78 in the bottom right section 20 of the view screen 12. As presented, this shows the three intersecting planes 80, 82, 84 that are illustrated in the remaining sections of the view screen 12, together with a three dimensional representation 86 of the surface. In practice, this can be displayed with the three dimensional surface in a contrasting and highly visible colour, shaded as appropriate to illustrate three dimensional structure, against (for example) a black and white representation of the three planes. This will then enable more detail to be distinguished than is possible through figure 18.
Variations and extensions
An alternative user interface that has been considered is to let the user specify the normal direction of the surface at each surface constraint point. This would effectively be the same as placing an inside constraint slightly inside the surface constraint point, along the normal direction. Such inside constraints are often referred to as normal constraints, and heavily influence the direction of the surface normal close to them. The user would no longer have to place any further inside constraints; a sketch of this conversion follows the list below.
To achieve this, a possible sequence of events is:
* the user clicks the mouse button at the border of a structure to place a surface constraint;
* a constraint point is placed and an arrow from the constraint point indicating the specified normal of the surface is displayed; the arrow tracks the mouse as the user moves it to set the correct normal direction;
* the user clicks the mouse button again to confirm the normal direction;
* at this time the normal constraint has been placed and is positioned in the currently displayed viewplane;
* moving a constraint point also moves the corresponding normal constraint (it keeps the same direction);
* the user can change the orientation of a normal constraint by dragging it with the mouse in any viewplane.
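A minimal sketch of how such a normal constraint could be converted into data for the solver sketched earlier (the outward-normal convention and the offset eps are assumptions of the sketch):

```python
import numpy as np

def normal_constraint(surface_point, normal, eps=1.0):
    """Given a surface point and a user-specified (outward) normal, derive
    the implied inside constraint: a point offset slightly inside the
    surface, with target value 1. eps is an illustrative offset."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    inside = np.asarray(surface_point, dtype=float) - eps * n
    return inside, 1.0   # pair with (surface_point, 0.0) when fitting
```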
It will of course be understood that many variations may be made to the above-described embodiment without departing from the scope of the present invention.

Claims (12)

1. A method of defining a volume, comprising the steps of: displaying for a user at least one cross-section through a region containing the volume; accepting from the user details of a plurality of edge points, each defining a limit to the volume; selecting a three dimensional surface that includes the edge points and displaying the intersection of that surface with the displayed cross-section; accepting from the user details of a further edge point; and selecting a further three dimensional surface that includes the edge points and the further edge point, and displaying the intersection of the further surface with the displayed cross-section.
2. A method according to claim 1 in which a plurality of cross-sections are displayed for the user.
3. A method according to claim 2 in which the plurality includes three cross sections intersecting at a point in the volume.
4. A method according to any one of the preceding claims in which the user is allowed to zoom and/or pan the cross-section(s).
5. A method according to any one of the preceding claims in which details are accepted from the user of at least one of an inner and an outer point, being a point lying within the volume to be defined and a point lying outside the volume to be defined, respectively.
6. A method according to any one of the preceding claims in which the surface is computed as an implicit surface that is the solution in three dimensions of a function of the type f(x, y, z) = 0.
7. A method according to any one of the preceding claims in which the surface is computed as an implicit surface that is the solution in three dimensions of a radial basis function of the type f(x) = p(x) + Σ_{i=1}^{N} λ_i Φ(‖x − x_i‖), where Φ(r), r > 0 is the basis function, p is a polynomial of low degree, N is the number of constraint points, and ‖·‖ is the Euclidean norm.
8. A method according to claim 7 in which the basis function Φ(r) is one selected from the group consisting of: r, r² log r, and r⁴ log r.
9. A method according to any one of claims 6 to 8 in which, to solve the function, a surface constraint point x_i is treated as specifying that f(x_i) = 0.
10. A method according to any one of claims 6 to 9 in which, to solve the function, an inside constraint point x_i specifies that f(x_i) = 1.
11. A method of defining a volume, comprising the steps of: displaying for a user at least one cross-section through a region containing the volume; accepting from the user details of a plurality of edge points, each defining a limit to the volume; selecting a three dimensional surface that includes the edge points and displaying the intersection of that surface with the displayed cross-section; wherein the three dimensional surface is computed as an implicit surface that is the solution in three dimensions of a radial basis function of the type f(x) = p(x) + Σ_{i=1}^{N} λ_i Φ(‖x − x_i‖), where Φ(r), r > 0 is the basis function, p is a polynomial of low degree, N is the number of constraint points, and ‖·‖ is the Euclidean norm.
12. A method according to any one of the preceding claims in which the computed surface is exported to a treatment planning system.
GB0520801A 2005-10-13 2005-10-13 3D manual segmentation by placement of constraint points Withdrawn GB2431308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0520801A GB2431308A (en) 2005-10-13 2005-10-13 3D manual segmentation by placement of constraint points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0520801A GB2431308A (en) 2005-10-13 2005-10-13 3D manual segmentation by placement of constraint points

Publications (2)

Publication Number Publication Date
GB0520801D0 GB0520801D0 (en) 2005-11-23
GB2431308A true GB2431308A (en) 2007-04-18

Family

ID=35451655

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0520801A Withdrawn GB2431308A (en) 2005-10-13 2005-10-13 3D manual segmentation by placement of constraint points

Country Status (1)

Country Link
GB (1) GB2431308A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1076318A1 (en) * 1999-08-13 2001-02-14 The John P. Robarts Research Institute Prostate boundary segmentation from 2d and 3d ultrasound images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Turk, G., O'Brien, J., "Modelling with Implicit Surfaces that Interpolate", October 2002, ACM Transactions on Graphics, Vol. 21, No. 4. *
Wang, Y., Neale, H., Downey, D., Fenster, A., "Semiautomatic three-dimensional segmentation of the prostate using two-dimensional ultrasound images", May 2003, Medical Physics, 30(5), pp 887-897 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102007050890A1 (en) * 2007-10-24 2009-04-30 Siemens Ag Medical image data reprocessing method for segmentation of medical image data structure, involves calculating surface of structures based on calculated contour, and continuing surface calculation based on image data and calculated contour
WO2009101577A2 (en) * 2008-02-15 2009-08-20 Koninklijke Philips Electronics N.V. Interactive selection of a region of interest and segmentation of image data
WO2009101577A3 (en) * 2008-02-15 2010-01-28 Koninklijke Philips Electronics N.V. Interactive selection of a region of interest and segmentation of image data
EP2192553A1 (en) * 2008-11-28 2010-06-02 Agfa HealthCare N.V. Method and apparatus for determining a position in an image, in particular a medical image
US8471846B2 (en) 2008-11-28 2013-06-25 Agfa Healthcare, Nv Method and apparatus for determining medical image position
EP3220357A3 (en) * 2016-03-15 2018-01-10 Siemens Healthcare GmbH Model-based generation and display of three-dimensional objects
US10733787B2 (en) 2016-03-15 2020-08-04 Siemens Healthcare Gmbh Model-based generation and representation of three-dimensional objects
WO2018222471A1 (en) * 2017-05-31 2018-12-06 General Electric Company Systems and methods for displaying intersections on ultrasound images
US10499879B2 (en) 2017-05-31 2019-12-10 General Electric Company Systems and methods for displaying intersections on ultrasound images

Also Published As

Publication number Publication date
GB0520801D0 (en) 2005-11-23

Similar Documents

Publication Publication Date Title
US6801643B2 (en) Anatomical visualization system
US7773786B2 (en) Method and apparatus for three-dimensional interactive tools for semi-automatic segmentation and editing of image objects
US7149333B2 (en) Anatomical visualization and measurement system
US8214756B2 (en) User interface for iterative image modification
US6175655B1 (en) Medical imaging system for displaying, manipulating and analyzing three-dimensional images
JP4584575B2 (en) Image processing method for interacting with 3D surface displayed in 3D image
US20050228250A1 (en) System and method for visualization and navigation of three-dimensional medical images
Konrad-Verse et al. Virtual resection with a deformable cutting plane.
Kretschmer et al. Interactive patient-specific vascular modeling with sweep surfaces
Hong et al. Implicit reconstruction of vasculatures using bivariate piecewise algebraic splines
CN106716496A (en) Visualizing volumetric image of anatomical structure
GB2431308A (en) 3D manual segmentation by placement of constraint points
EP0836729B1 (en) Anatomical visualization system
JP2018526057A (en) Interactive mesh editing
Rhee et al. Scan-based volume animation driven by locally adaptive articulated registrations
EP3989172A1 (en) Method for use in generating a computer-based visualization of 3d medical image data
Elliott et al. An object-oriented system for 3D medical image analysis
Bornik et al. Interactive editing of segmented volumetric datasets in a hybrid 2D/3D virtual environment
Gao et al. Three dimensional surface warping for plastic surgery planning
Kuhn Aim project a2003: Computer vision in radiology (covira)
Montilla et al. Computer assisted planning using dependent texture mapping and multiple rendering projections in medical applications
Lagos Fast contextual view generation and region of interest selection in 3D medical images via superellipsoid manipulation, blending and constrained region growing
Fong et al. Development of a virtual reality system for Hepatocellular Carcinoma pre-surgical planning
Fletcher et al. Computer-Generated Modelling in Surgery
Neumann Localization and Classification of Teeth in Cone Beam Computed Tomography using 2D CNNs

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)