WO2001022352A1 - Method and apparatus for image registration using large deformation diffeomorphisms on a sphere - Google Patents

Method and apparatus for image registration using large deformation diffeomorphisms on a sphere Download PDF

Info

Publication number
WO2001022352A1
WO2001022352A1 PCT/US2000/025971 US0025971W WO0122352A1 WO 2001022352 A1 WO2001022352 A1 WO 2001022352A1 US 0025971 W US0025971 W US 0025971W WO 0122352 A1 WO0122352 A1 WO 0122352A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
template image
target
large deformation
points
Prior art date
Application number
PCT/US2000/025971
Other languages
French (fr)
Inventor
Michael I. Miller
Sarang C. Joshi
Muge M. Bakircioglu
Original Assignee
Miller Michael I
Joshi Sarang C
Bakircioglu Muge M
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Miller Michael I, Joshi Sarang C, Bakircioglu Muge M filed Critical Miller Michael I
Priority to EP00965280A priority Critical patent/EP1222608A1/en
Priority to AU76019/00A priority patent/AU7601900A/en
Publication of WO2001022352A1 publication Critical patent/WO2001022352A1/en

Classifications

    • G06T3/153
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/754Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries involving a deformation of the sample pattern or of the reference pattern; Elastic matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain

Definitions

  • the present invention relates to image processing systems and methods, and more particularly to image registration systems that combine two or more images into a composite image; in particular the fusion of anatomical manifold-based knowledge with volume imagery via large deformation mapping which supports both kinds of information simultaneously, as well as individually, and which can be implemented on a rapid convolution FFT based computer system.
  • Image registration involves combining two or more images, or selected points from the images, to produce a composite image containing data from each of the registered images. During registration, a transformation is computed that maps related points among the combined images so that points defining the same structure in each of the combined images are correlated in the composite image.
  • Each of the coefficients, k_i, is assumed known.
  • mapping relationship u(x) is extended from the set of N landmark points to the continuum using a linear quadratic form regularization optimization of the equation:
  • S. Timoshenko, Theory of Elasticity, McGraw-Hill, 1934 (hereinafter referred to as Timoshenko) and R.L. Bisplinghoff, J.W. Mar, and T.H.H. Pian, Statics of Deformable Solids, Dover Publications, Inc., 1965 (hereinafter referred to as Bisplinghoff).
  • Others have used this operator in their work, see e.g., Amit, U. Grenander, and M. Piccioni, "Structural image restoration through deformable templates," J. American Statistical Association.
  • a distance measure represented by the expression D(u)
  • D(u) represents the distance between a template T(x) and a target image S(x).
  • the distance measure D(u) measuring the disparity between imagery has various forms, e.g., the Gaussian squared error distance J a correlation distance, or a
  • fusion approaches involve small deformation mapping coordinates x ⁇ ⁇ of one set of imagery to a second set of imagery.
  • Other techniques include the mapping of predefined
  • the distance measure changes depending upon whether landmarks or imagery are being matched.
  • the field u(x) specifying the mapping h is extended from the set of points {x_i} identified in the target to the points {y_i} measured with Gaussian error covariances Σ_i:
  • the second approach is purely volume image data driven, in which the volume
  • h(·) = · − u(·) where û = arg min ∫ ‖Lu‖² + D(u)   (5)
  • the data function D(u) measures the disparity between imagery and has various forms. Other distances are used besides the Gaussian squared error distance, including correlation distance, Kullback-Leibler distance, and others.
  • small deformation methods provide geometrically meaningful deformations under conditions where the imagery being matched represents small, linear, or affine changes from one image to the other.
  • Small deformation mapping does not allow the automatic calculation of tangents, curvature, surface areas, and geometric properties of the imagery.
  • Fig. 9 shows an oval template image with several landmarks highlighted.
  • Fig. 10 shows a target image that is greatly deformed from the template image. The target image is a largely deformed oval that has been twisted.
  • Fig. 11 shows the results of image matching when the four corners are fixed.
  • the diffeomorphic transformations constructed are of high dimension, having, for example, a dimension greater than the 12 of the affine transform, up to the order of the number of voxels in the volume.
  • a transformation is diffeomorphic if the transformation from the template to the target is one-to-one, onto,
  • a transformation is said to be one-to-one if no two distinct points in the template are mapped to the same point in the target.
  • a transformation is said to be onto if every point in the target is mapped from a point in the template. The importance of generating diffeomorphisms is that tangents,
  • Fig. 12 illustrates the image mapping illustrated in Fig. 11 using diffeomorphic transformation.
  • the present invention overcomes the limitations of the conventional techniques of
  • image registration by providing a methodology which combines, or fuses, some aspects of techniques where an individual with expertise in the structure of the object represented in the images labels a set of landmarks in each image that are to be registered and techniques that use mathematics of small deformation multi-target registration, which is purely image data driven.
  • a large deformation transform is computed using the selected coordinate frame, a manifold landmark transformation operator, and at least one manifold landmark transformation boundary value.
  • the large deformation transform relates
  • Fig. 1 is a target and template image of an axial section of a human head with 0-dimensional manifolds
  • Fig. 2 is schematic diagram illustrating an apparatus for registering images in accordance with the present invention
  • Fig. 3 is a flow diagram illustrating the method of image registration according to the
  • Fig. 4 is a target and a template image with 1-dimensional manifolds
  • Fig. 5 is a target and a template image with 2-dimensional manifolds
  • Fig. 6 is a target and a template image with 3-dimensional manifolds
  • Fig. 7 is a sequence of images illustrating registration of a template and target image.
  • Fig. 8 is a flow diagram illustrating the computation of a fusing transform
  • Fig. 9 is an oval template image which has landmark points selected and highlighted
  • Fig. 10 is a deformed and distorted oval target image with corresponding landmark points highlighted and selected;
  • Fig. 11 is an image matching of the oval target and template images; and Fig. 12 is an image matching using diffeomorphism.
  • Fig. 1 shows two axial views of a human head.
  • template image 100 contains points 102, 104, and 114 identifying
  • Target image 120 contains points 108, 110, 116, corresponding respectively to template image points 102, 104, 114, via vectors 106, 112, 118, respectively.
  • Fig. 2 shows apparatus to carry out the preferred embodiment of this invention.
  • a medical imaging scanner 214 obtains the images shown in Fig. 1 and stores them in computer memory 206, which is connected to a computer central processing unit (CPU) 204.
  • One of ordinary skill in the art will recognize that a parallel computer platform having multiple CPUs is also a suitable hardware platform for the present invention, including, but not limited to, massively parallel machines and workstations with multiple processors.
  • Computer memory 206 can be directly connected to CPU 204, or this memory can be remotely connected through a communications network.
  • Registering images 100, 120 unifies registration based on landmark deformations and image data transformation using a coarse-to-fine approach.
  • the highest dimensional transformation required during registration is computed from the solution of a sequence of lower dimensional problems driven by successive refinements.
  • the method is based on information either provided by an operator,
  • an operator using pointing device 208, moves cursor 210 to select points 102, 104, 114 in Fig. 1, which are then displayed on a computer monitor 202 along with images 100, 120.
  • Selected image points 102, 104, and 114 are 0-dimensional manifold landmarks.
  • CPU 204 computes a first transform relating the manifold landmark points in template image 100 to their corresponding image points in target image 120.
  • a second transform is then computed by CPU 204 by fusing the first transform with a distance measure relating all image points in both the template and target images.
  • the operator can select an equation for the distance measure several ways including, but not limited to, selecting an
  • Registration is completed by CPU 204 applying the second computed transform to all points in the template image 100.
  • the transforms, boundary values, region of interest, and distance measure can be defaults read from memory or determined automatically.
  • Fig. 3 illustrates the method of this invention in operation.
  • First an operator defines a set of N manifold landmark points x_i, where i = 1, ..., N, represented by the variable M, in the template image (step 300). These points should correspond to points that are easy to identify in the target image.
  • Associated with each landmark point, x_i, in the template image is a corresponding point y_i in the target image.
  • the operator therefore next identifies the corresponding points, y_i, in the target image (step 310).
  • the nature of this process means that the corresponding points can only be identified within some degree of accuracy.
  • This mapping between the template and target points can be specified with a resolution having a Gaussian error of variance σ².
  • K(x,x_i) is the Green's function of a volume landmark transformation operator L² (assuming L is self-adjoint):
  • the operator may select a region of interest in the target image. Restricting the computation to a relatively small region of interest reduces both computation and storage requirements because the transformation is computed only over the subregion of interest.
  • the entire image is the desired region of interest. In other applications, there may be default regions of interest that are automatically identified.
  • the number of computations required is proportional to the number of points in the region of interest, so the computational savings equals the ratio of the total number of points in the image to the number of points in the region of interest.
  • The data storage savings for an image with N points with a region of interest having M points is a factor of N/M.
  • the computation time and the data storage are reduced by a factor of eight.
  • performing the computation only over the region of interest makes it necessary only to store a subregion, providing a data storage savings for the template image, the target image, and the transform values.
  • CPU 204 computes a transform that embodies the mapping relationship between these two sets of points (step 350).
  • This transform can be estimated using Bayesian optimization, using the following equation:
  • A is a 3 x 3 matrix
  • b = [b_1, b_2, b_3] is a 3 x 1 vector
  • the foregoing steps of the image registration method provide a coarse matching of a template and a target image.
  • Fine matching of the images requires using the full image data and the landmark information and involves selecting a distance measure by solving a synthesis equation that simultaneously maps selected image landmarks in the template and target images and matches all image points within a region of interest.
  • An example of this synthesis equation is: û = arg min_u γ ∫ |T(x − u(x)) − S(x)|² dx + ∫ ‖Lu‖² + Σ_{i=1}^N |y_i − x_i − u(x_i)|² / σ_i²   (11)
  • the operator L in equation (11) may be the same operator used in equation (9), or alternatively, another operator may be used with a different set of boundary conditions.
  • the distance measure in the first term measures the relative position of points in the target image with respect to points in the template image.
  • this synthesis equation uses a quadratic distance measure, one of ordinary skill in the art will recognize that there are other suitable distance measures.
  • CPU 204 then computes a second or fusing transformation (Step 370) using the synthesis equation relating all points within a region of interest in the target image to all corresponding points in the template image.
  • the synthesis equation is defined so that the resulting transform incorporates, or fuses, the mapping of manifold landmarks to corresponding target image points determined when calculating the first transform.
  • the computation using the synthesis equation is accomplished by solving a sequence of optimization problems from coarse to fine scale via estimation of the basis coefficients μ_k.
  • This is analogous to multi-grid methods, but here the notion of refinement from coarse to fine is accomplished by increasing the number of basis components d. As the number of basis functions increases, smaller and smaller variabilities between the template and target are accommodated.
  • the basis coefficients are determined by gradient descent, i.e.,
  • Δ is a fixed step size and λ_k are the eigenvalues of the eigenvectors φ_k.
  • Equation (13) is then used to estimate the new values of the basis coefficients μ_k^(n+1) given the current estimate of the displacement field u^(n)(x) (step 804).
  • Equation (15) is then used to compute the new estimate of the displacement field u^(n+1)(x) given the current estimate of the basis coefficients μ_k^(n) (step 806).
  • the next part of the computation is to decide whether or not to increase the number d of basis functions φ_k used to represent the transformation (step 808). Increasing the number of basis functions allows more deformation. Normally, the algorithm is started with a small number of basis functions corresponding to low frequency eigen functions and then on defined iterations the number of frequencies is increased by one (step 810). This coarse-to-fine strategy matches larger structures before smaller structures. The preceding computations (steps 804-810) are repeated until the computation has converged or the maximum number of iterations is reached (step 812). The final displacement field is then used to transform the template image (step 814).
  • CPU 204 uses this transform to register the template image with the target image (step 380).
  • the spectrum of the second transformation, h is highly concentrated around zero. This means that the spectrum mostly contains low frequency components.
  • the transformation can be represented by a subsampled version provided that the sampling frequency is greater than the Nyquist frequency of the transformation.
  • the computation may be accelerated by computing the transformation on a coarse grid and extending it to the full voxel lattice e.g., in the case of 3D images, by interpolation.
  • the computational complexity of the algorithm is proportional to the dimension of the lattice on which the transformation is computed. Therefore, the computation acceleration equals the ratio of the full voxel lattice to the coarse computational lattice.
  • Fig. 4 shows a template image 400 of a section of a brain with 1- dimensional manifolds 402 and 404 corresponding to target image 406 1-dimensional manifolds 408 and 410 respectively.
  • Fig. 5 shows a template image 500 of a section of a brain with 2-dimensional manifold 502 corresponding to target image 504 2-dimensional manifold 506.
  • Fig. 6 shows a template image 600 of a section of a brain with 3-dimensional manifold 602 corresponding to target image 604 3-dimensional manifold 606.
  • When the manifold is a sub-volume M(3), dS is the Lebesgue measure on R³.
  • For 2-dimensional surfaces, dS is the surface measure on M(2).
  • For 1-dimensional manifolds (curves), dS is the line measure on M(1).
  • For point landmarks M(0), dS is the atomic measure.
  • the Fredholm integral equation degenerates into a summation given by equation (10).
  • It is also possible to compute the transform (step 370) with rapid convergence by solving a series of linear minimization problems where the solution to the series of linear problems converges to the solution of the nonlinear problem. This avoids needing to solve the nonlinear minimization problem directly.
  • the computation converges faster than a direct solution of the synthesis equation because the basis coefficients μ_k are updated with optimal step sizes.
  • if step 370 of Fig. 3, computing the registration transform fusing landmark and image data, is implemented using the conjugate gradient method, the computation will involve a series of inner products.
  • the FFT exploits the structure of the eigen functions and the computational efficiency of the FFT to compute these inner-products.
  • one form of a synthesis equation for executing Step 370 of Fig. 3 will include the following three terms:
  • a distance function used to measure the disparity between images is the Gaussian squared error distance ∫ |T(x − u(x)) − S(x)|² dx.
  • distance functions such as the correlation distance, or the Kullback-Leibler distance, can be written in the form ∫ D(T(x − u(x)), S(x)) dx.
  • D(.,.) is a distance function relating points in the template and target images.
  • the displacement field is assumed to have the form:
  • the basis coefficients μ_k are determined by gradient descent, i.e.,
  • D'(·,·) is the derivative with respect to the first argument.
  • each of the inner-products in the algorithm, if computed directly, would have a computational complexity of the order (N³)².
  • the overall complexity of image registration would also be of the order (N³)².
  • each of the FFTs proposed has a computational complexity on the order of N³ log₂ N³.
  • boundary conditions such as the Dirichlet, Neumann, or mixed Dirichlet and Neumann boundary conditions are also suitable.
  • the following equation is used in an embodiment of the present invention using one set of mixed Dirichlet and Neumann boundary conditions:
  • Modifying boundary conditions requires modifying the butterflies of the FFT from complex exponentials to appropriate sines and cosines.
  • In Fig. 7, four images, template image 700, image 704, image 706, and target image 708, illustrate the sequence of registering a template image and a target image.
  • Template image 700 has 0-dimensional landmark manifolds 702. Applying the landmark manifold transform computed at step 350 in Fig. 3 to image 700 produces image 704. Applying a second transform computed using the synthesis equation combining landmark manifolds and image data to image 700 produces image 706.
  • Image 706 is the final result of registering template image 700 with target image 708.
  • Landmark manifold 710 in image 708 corresponds to landmark manifold 702 in template image 700.
  • the large deformation maps h : Ω → Ω are constructed by introducing the time variable,
  • the distance between the target and template imagery landmarks is preferably defined as .
  • a preferable diffeomorphism is the minimizer of Eqns. 40, 41 with D(u(T)) the landmark distance:
  • u(x,T) = ∫_0^T (I − ∇u(x,t)) v(x,t) dt   (43)
  • a diffeomorphic map is computed using an image transformation operator and image transformation boundary values relating the template image to the target image. Subsequently the template image is registered with the target image using the diffeomorphic map.
  • u(x,T) = ∫_0^T (I − ∇u(x,t)) v(x,t) dt, and   (48)
  • a method for registering images consistent with the present invention preferably includes the following steps:
  • STEP 2 Solve the optimization via a sequence of optimization problems from coarse to fine scale via re-estimation of the basis coefficients v_k, analogous to multi-grid methods with the notion of refinement from coarse to fine accomplished by increasing the number of basis components. For each v_k,
  • the differential operator L can be chosen to be any in a class of linear differential operators; we have used operators of the form (−aΔ − b∇∇· + cI)^p, p ≥ 1. The operators are 3 x 3 matrices
  • transformations may be chosen, including the affine motions, rigid motions generated from subgroups of the generalized linear group, large deformation landmark transformations which are diffeomorphisms, or the high dimensional large deformation image matching transformation (the dimensions of the transformations of the vector fields are listed in increasing order). Since these are all diffeomorphisms, they can be composed.
  • the particle flows φ(t) are defined by the velocities v(·) according to the fundamental O.D.E.
  • v(x,t) = Σ_{k=1}^N K(φ_k(t), x) Σ_{i=1}^N (K(φ(t))⁻¹)_{ki} φ̇_i(t)   (62)
  • the method of the present embodiment utilizes the Lagrangian positions.
  • STEP 4 After stopping, then compute the optimal velocity field using equation 62 and transform using equation 64.
  • h(·, t) = · − u(·, t)
  • h(x, t) = x − u(x, t)
  • the transformation and velocity fields are related via the O.D.E. ∂u(x,t)/∂t = v(x,t) − ∇u(x,t)v(x,t), t ∈ [0,T]
  • the deformation fields are generated from the velocity fields, assumed to be piecewise constant over quantized time increments.
  • the body force b(x-u(x,t)) is given by the variation of the distance D(u) with respect to the field at time t.
  • the PDE is solved numerically; see G. E. Christensen, R. D. Rabbitt, and M. I. Miller, "Deformable templates using large deformation kinematics," IEEE Transactions on Image Processing, 5(10):1435-1447, October 1996 (hereinafter referred to as Christensen) for details.
  • ⁇ ( ', ) J i f ⁇ a ⁇ d j. r ⁇ ( - " )( v,- )) - ⁇ t ⁇ v/ oi * - «fc ' • e - V U ( ⁇ ) ⁇ W d t
  • The implementation presented in Section 5 is modified for different boundary conditions by modifying the butterflies of the FFT from complex exponentials to appropriate sines and cosines.
  • a method for registering images using large deformation diffeomorphisms on a sphere comprises selecting a coordinate frame suitable for spherical geometries and registering the target and template image using a large deformation diffeomorphic transform in the selected coordinate frame. While there are many applications that benefit from an image registration technique adapted to spherical geometries, one such example, registering brain images, is discussed herein to illustrate the technique. One skilled in the art will recognize that other imaging applications are equally suited for this technique, such as registering images of other anatomical regions and registering non-anatomical imagery containing spherical regions of interest.
  • An application of brain image registration is the visualization of cortical studies.
  • Current methods of visualizing cortical brain studies use flat maps. Although flat maps bring the buried cortex into full view and provide compact representations, limitations are introduced by the artificial cuts needed to preserve topological relationships across the cortical surface. Mapping the cortical surface of a hemisphere to a sphere, however, allows points on the surface to be represented by a two-dimensional coordinate system that preserves the topology. Spherical maps allow visualization of the full extent of sulci and the buried cortex within the folds.
  • An embodiment consistent with the present invention generates large deformation diffeomorphisms on the sphere S², which has a one-to-one correspondence with a reconstructed cortical surface.
  • the final transformed coordinate map is defined as φ(·, 1).
  • the template image and target image spheres are characterized by the set of landmarks {x_n, y_n, n = 1, 2, ..., N} ⊂ S².
  • Diffeomorphic matches are constructed by forcing the velocity fields to be associated with quadratic energetics on S² x [0, 1].
  • the diffeomorphic landmark matching is constructed to minimize a running smoothness energy on the velocity field as well as the end point distance between the template and target landmarks.
  • L is a differential operator
  • θ(x,y) is the solid angle between points x, y on the sphere
  • dμ(x) = sin θ dθ dφ is the surface measure.
  • the coordinate frames are given by — and according to :
  • each hemisphere of the brain is mapped to a sphere individually, and the area where they are attached to each other is transformation invariant. For some applications, this conforms to an anatomical constraint, e.g., the point at which the brain is attached to the spine.
  • the location producing zero-valued coordinate frames can be user-selected by computing a rigid alignment to move this location in the image.
  • One example of such an alignment involves aligning the sets of landmarks by computing a rigid transform (which, in a spherical coordinate frame, is a rotation) and then applying another rigid transform to ensure that the fixed points are aligned correctly.
  • An embodiment of the present invention uses stereographic projection to parameterize the unit sphere corresponding to the right hemisphere of the brain, with the center shifted to (-1,0,0), taking the shadow (u,v) of each point in the yz plane while shining a light from point (-2,0,0). Note that the shadow of the point where the light source is located is at infinity, and this becomes the point where the coordinate frames vanish. The location of the light source and the projection plane can be adjusted to place the transformation invariant point as needed (an illustrative sketch of this projection appears at the end of this list).
  • Let P denote the stereographic projection from S² \ (-2,0,0) to (u,v) ∈ R²:
  • I6 «(9,y) (4v( ⁇ ,uQ 2 -4a( ⁇ , ⁇ ) 2 ⁇ 16) -8 «( ⁇ , ⁇ )v(8, ⁇ ) , ⁇ ⁇ V( ⁇ , ⁇ )+v 2 ( ⁇ , ⁇ ) + 4) 2 ' ( u ( ⁇ , ⁇ ) 2 ⁇ v( ⁇ >V ) 2 ⁇ 4) 2 ( ⁇ , ⁇ ) 2 + v( ⁇ , ⁇ ) 2 + 4) 2 l8
  • a similar parameterization can be obtained for the left hemisphere by shifting the center to (1,0,0) and shining the light from (2,0,0) which becomes the transformation invariant point. Accordingly, an appropriate coordinate frame for images containing objects having spherical geometry is generated for subsequent image registration.
  • the covariance operator is then computed using spherical harmonics.
  • the covariance operator (which is the Green's function squared of the Laplacian operator) will determine the solution to the diffeomorphic matching (see equation 93 below).
  • There are (2n + 1) spherical harmonics of order n for each n, and they are of the even and odd harmonic form with 0 ≤ m ≤ n: Y_nm(θ,φ) = k_nm P_nm(cos θ) cos mφ   (88)
  • An embodiment of the present invention uses a gradient algorithm to register the images.
  • δ[t_k − 1] = 1 for t_k = 1, and 0 otherwise; ε is the gradient step.
  • v = v^(m), and for all x ∈ S²
  • FIG. 2 shows an apparatus to carry out an embodiment of this invention.
  • a medical imaging scanner 214 obtains images 100 and 120 and stores them in computer memory 206, which is connected to computer central processing unit (CPU) 204.
  • One of ordinary skill in the art will recognize that a parallel computer platform having multiple CPUs is also a suitable hardware platform for the present invention, including, but not limited to, massively parallel machines and workstations with multiple processors.
  • Computer memory 206 can be directly connected to CPU 204, or this memory can be remotely connected through a communications network.
  • the methods described herein use information either provided by an operator, stored as defaults, or determined automatically about the various substructures of the template and the target, and varying degrees of knowledge about these substructures derived from anatomical imagery, acquired from modalities like CT, MRI, functional MRI, PET, ultrasound, SPECT, MEG, EEG, or cryosection.
  • an operator can guide cursor 210 using pointing device 208 to select points in image 100.
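As a concrete illustration of the stereographic parameterization described in the preceding list, the following minimal Python sketch projects points of the unit sphere centered at (-1, 0, 0) onto the yz plane with the light source at (-2, 0, 0). It is illustrative only; the function and variable names are not taken from the patent.

```python
import numpy as np

def stereographic_projection(points):
    """Project points on the unit sphere centered at (-1, 0, 0) onto the
    (u, v) plane (the yz plane), shining a light from (-2, 0, 0).

    points : (N, 3) array of points satisfying |p - (-1, 0, 0)| = 1.
    Returns an (N, 2) array of (u, v) shadows.  The light-source point
    (-2, 0, 0) maps to infinity and is the transformation-invariant
    location where the coordinate frames vanish.
    """
    p = np.asarray(points, dtype=float)
    t = 2.0 / (p[:, 0] + 2.0)            # parameter where the ray meets x = 0
    return np.stack([t * p[:, 1], t * p[:, 2]], axis=1)

# The point of the sphere nearest the projection plane, (0, 0, 0), maps to
# the origin of the (u, v) plane; (-1, 1, 0) maps to (2, 0).
print(stereographic_projection([[0.0, 0.0, 0.0], [-1.0, 1.0, 0.0]]))
```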

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

An apparatus and method for image registration of a template image (100) with a target image (120) with large deformation. The apparatus and method involve computing a large deformation transform based on landmark manifolds (350), image data or both. The apparatus and method are capable of registering (380) images with a small number of landmark points. Registering the images is accomplished by applying the large deformation transform.

Description

METHOD AND APPARATUS FOR IMAGE REGISTRATION USING LARGE DEFORMATION DIFFEOMORPHISMS ON A SPHERE
Related Applications
This is a continuation-in-part of prior U.S. Patent Application Serial No. 09/186,359 filed November 5, 1998. This continuation-in-part claims priority to U.S. Patent Application Serial No. 09/186,359 and incorporates the same by reference herein. This continuation-in-part also claims priority to U.S. provisional application 60/155,141 filed September 22, 1999 and incorporates the same by reference herein.
Government Rights
This work was supported in part by the following U.S. Government grants: NIH grants RR01380 and R01-MH52138-01A1; ARO grant DAAL-03-86-K-0110; and NSF grant BIR-9424264. The U.S. Government may have certain rights in the invention.
Technical Field
The present invention relates to image processing systems and methods, and more particularly to image registration systems that combine two or more images into a composite image; in particular the fusion of anatomical manifold-based knowledge with volume imagery via large deformation mapping which supports both kinds of information simultaneously, as well as individually, and which can be implemented on a rapid convolution FFT based computer system.
Background Art
Image registration involves combining two or more images, or selected points from the images, to produce a composite image containing data from each of the registered images. During registration, a transformation is computed that maps related points among the combined images so that points defining the same structure in each of the combined images are correlated in the composite image.
Currently, practitioners follow two different registration techniques. The first requires that an individual with expertise in the structure of the object represented in the images label a set of landmarks in each of the images that are to be registered. For example, when registering two MRI images of different axial slices of a human head, a physician may label points, or a
contour surrounding these points, corresponding to the cerebellum in two images. The two images are then registered by relying on a known relationship among the landmarks in the two brain images. The mathematics underlying this registration process is known as small deformation multi-target registration.
In the previous example of two brain images being registered, using a purely operator-
driven approach, a set of N landmarks identified by the physician, represented by x_i, where i = 1, . . ., N, are defined within the two brain coordinate systems. A mapping relationship, mapping the N points selected in one image to the corresponding N points in the other image, is defined by the equation u(x_i) = k_i, where i = 1, . . ., N. Each of the coefficients, k_i, is assumed
known.
The mapping relationship u(x) is extended from the set of N landmark points to the continuum using a linear quadratic form regularization optimization of the equation:
û = arg min_u ∫ ‖Lu‖²   (1)
subject to the boundary constraints u(x_i) = k_i. The operator L is a linear differential operator. This linear optimization problem has a closed form solution. Selecting L = α∇² + β∇(∇·) gives rise to small deformation elasticity.
For a description of small deformation elasticity see S. Timoshenko, Theory of Elasticity, McGraw-Hill, 1934 (hereinafter referred to as Timoshenko) and R.L. Bisplinghoff, J.W. Mar, and T.H.H. Pian, Statics of Deformable Solids, Dover Publications, Inc., 1965 (hereinafter referred to as Bisplinghoff). Selecting L = ∇² gives rise to a membrane or Laplacian model. Others have used this operator in their work, see e.g., Y. Amit, U. Grenander, and M. Piccioni, "Structural image restoration through deformable templates," J. American Statistical Association, 86(414):376-387, June 1991 (hereinafter referred to as Amit) and R. Szeliski, Bayesian Modeling of Uncertainty in Low-Level Vision, Kluwer Academic Publisher, Boston, 1989 (hereinafter referred to as Szeliski) (also describing a bi-harmonic approach). Selecting L = ∇⁴ gives a spline or biharmonic registration method. For examples of applications using this operator see Grace Wahba, "Spline Models for Observational Data," Regional Conference
Series in Applied Mathematics, SIAM, 1990 (hereinafter referred to as Wahba) and F.L. Bookstein, The Measurement of Biological Shape and Shape Change, volume 24, Springer-
Verlag: Lecture Notes in Biomathematics, New York, 1978 (hereinafter referred to as
Bookstein).
The second currently-practiced technique for image registration uses the mathematics of small deformation multi-target registration and is purely image data driven. Here, volume
based imagery is generated of the two targets from which a coordinate system transformation is constructed. Using this approach, a distance measure, represented by the expression D(u), represents the distance between a template T(x) and a target image S(x). The optimization equation guiding the registration of the two images using a distance measure is:
û = arg min_u ∫ ‖Lu‖² + D(u)   (2)
The distance measure D(u) measuring the disparity between imagery has various forms, e.g., the Gaussian squared error distance ∫ |T(x − u(x)) − S(x)|² dx, a correlation distance, or a Kullback-Leibler distance. Registration of the two images requires finding a mapping that minimizes this distance.
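As a hedged illustration of the image-driven distance term, the short Python sketch below evaluates the Gaussian squared error distance between a deformed template and a target on a discrete 2-D grid. The helper name and the nearest-neighbour sampling are illustrative simplifications, not part of the patent.

```python
import numpy as np

def gaussian_squared_error_distance(template, target, u):
    """Approximate D(u) = sum_x |T(x - u(x)) - S(x)|^2 on a 2-D grid.

    template, target : 2-D arrays of equal shape (T and S sampled on the grid).
    u                : displacement field of shape (2, H, W), in pixel units.
    """
    h, w = template.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Deformed sample locations x - u(x), clamped to the image domain.
    yq = np.clip(ys - u[0], 0, h - 1)
    xq = np.clip(xs - u[1], 0, w - 1)
    # Nearest-neighbour sampling keeps the example short; in practice an
    # interpolation scheme would be used for T(x - u(x)).
    deformed = template[yq.round().astype(int), xq.round().astype(int)]
    return np.sum((deformed - target) ** 2)

# Zero displacement reduces D(u) to the plain squared error between T and S.
T = np.random.rand(64, 64)
S = np.random.rand(64, 64)
print(gaussian_squared_error_distance(T, S, np.zeros((2, 64, 64))))
```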
Other fusion approaches involve small deformation mapping of coordinates x ∈ Ω of one set of imagery to a second set of imagery. Other techniques include the mapping of predefined
landmarks and imagery, both taken separately, such as in the work of Bookstein, or fused, as covered via the approach developed by Miller, Joshi, Christensen, and Grenander described in U.S. Patent No. 6,009,212 (hereinafter "the '212 patent"), herein incorporated by reference, mapping coordinates x ∈ Ω of one target to a second target. The existing state of the art for small deformation matching can be stated as follows:
Small Deformation Matching: Construct h(x) = x − u(x) according to the minimization
of the distance D(u) between the template and target imagery subject to the smoothness penalty defined by the linear differential operator L:
h(·) = · − u(·) where û = arg min ∫ ‖Lu‖² + Σ D(u).   (3)
The distance measure changes depending upon whether landmarks or imagery are being matched. 1. Landmarks alone. The first approach is purely operator driven, in which a set of point landmarks x_i, i = 1, . . ., N, are defined within the two brain coordinate systems by, for example, an anatomical expert or automated system, from which the mapping is assumed known: u(x_i) = k_i, i = 1, . . ., N. The field u(x) specifying the mapping h is extended from the set of points {x_i} identified in the target to the points {y_i} measured with Gaussian error covariances Σ_i:
û = arg min_u ∫ ‖Lu‖² + Σ_{i=1}^N (y_i − x_i − u(x_i))ᵀ Σ_i⁻¹ (y_i − x_i − u(x_i))   (4)
Here (·)ᵀ denotes transpose of the 3 x 1 real vectors, and L is a linear differential
operator giving rise to small deformation elasticity (see Timoshenko and Bisplinghoff), the membrane or Laplacian model (see Amit and Szeliski), bi-harmonic (see Szeliski), and many of the spline methods (see Wahba and Bookstein). This is a linear optimization problem with closed form solution.
2. The second approach is purely volume image data driven, in which the volume
based imagery is generated of the two targets from which the coordinate system transformation
is constructed. A distance measure between the two images being registered, I₀ and I₁, is defined as D(u) = ∫ |I₀(x − u(x)) − I₁(x)|² dx.
The corresponding optimization is:
h(·) = · − u(·) where û = arg min ∫ ‖Lu‖² + D(u).   (5)
The data function D(u) measures the disparity between imagery and has various forms. Other distances are used besides the Gaussian squared error distance, including correlation distance, Kullback-Leibler distance, and others.
3. The algorithm for the transformation of imagery I₀ into imagery I₁ has landmark and volume imagery fused in the small deformation setting as in the '212 patent. Both sources of information are combined into the small deformation registration:
û = arg min ∫ ‖Lu‖² + D(u) + Σ_{i=1}^N |y_i − x_i − u(x_i)|² / σ_i²   (6)
Although small deformation methods provide geometrically meaningful deformations under conditions where the imagery being matched represents small, linear, or affine changes from one image to the other, small deformation mapping does not allow the automatic calculation of tangents, curvature, surface areas, and geometric properties of the imagery. To illustrate the
mapping problem, Fig. 9 shows an oval template image with several landmarks highlighted. Fig. 10 shows a target image that is greatly deformed from the template image. The target image is a largely deformed oval that has been twisted. Fig. 11 shows the results of image
matching when the four corners are fixed, using small deformation methods based on static quadratic form regularization. These figures illustrate the distortion which occurs with small deformation linear mapping when used with landmark points which define a motion
corresponding to large deformation. As can be seen in Fig. 11, landmarks defined in the template image often map to more than one corresponding point in the target image. Large deformation mapping produces maps for image registration in which the goal is to find the one-to-one, onto, invertible, differentiable maps h (henceforth termed diffeomorphisms) from the coordinates x ∈ Ω of one target to a second target under the mapping
h : x → h(x) = x − u(x), x ∈ Ω   (7)
To accommodate very fine variations in anatomy, the diffeomorphic transformations constructed are of high dimension, having, for example, a dimension greater than the 12 of the affine transform, up to the order of the number of voxels in the volume. A transformation is diffeomorphic if the transformation from the template to the target is one-to-one, onto,
invertible, and both the transformation and its inverse are differentiable. A transformation is said to be one-to-one if no two distinct points in the template are mapped to the same point in the target. A transformation is said to be onto if every point in the target is mapped from a point in the template. The importance of generating diffeomorphisms is that tangents,
curvature, surface areas, and geometric properties of the imagery can be calculated automatically. Fig. 12 illustrates the image mapping illustrated in Fig. 11 using diffeomorphic transformation.
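A practical consequence of requiring a diffeomorphism is that the Jacobian determinant of h(x) = x − u(x) must remain positive everywhere; where it is zero or negative, the map folds and is no longer one-to-one. The sketch below, which is illustrative and not part of the patent, estimates this determinant by finite differences for a 2-D displacement field.

```python
import numpy as np

def jacobian_determinant(u):
    """Finite-difference Jacobian determinant of h(x) = x - u(x) in 2-D.

    u : displacement field of shape (2, H, W).
    Returns an (H, W) array; values <= 0 indicate folding, i.e. the loss of
    the one-to-one property required of a diffeomorphism.
    """
    duy_dy, duy_dx = np.gradient(u[0])
    dux_dy, dux_dx = np.gradient(u[1])
    # Jacobian of h = identity minus the gradient of u.
    j11 = 1.0 - duy_dy
    j12 = -duy_dx
    j21 = -dux_dy
    j22 = 1.0 - dux_dx
    return j11 * j22 - j12 * j21

# The identity map (zero displacement) has determinant 1 everywhere.
assert np.allclose(jacobian_determinant(np.zeros((2, 32, 32))), 1.0)
```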
Summary of the Invention
The present invention overcomes the limitations of the conventional techniques of
image registration by providing a methodology which combines, or fuses, some aspects of techniques where an individual with expertise in the structure of the object represented in the images labels a set of landmarks in each image that are to be registered and techniques that use mathematics of small deformation multi-target registration, which is purely image data driven.
Additional features and advantages of the invention will be set forth in the description which follows, and in part, will be apparent from the description, or may be learned by practicing the invention. The embodiments and other advantages of the invention will be realized and obtained by the method and apparatus particularly pointed out in the written description and the claims hereof as well as in the appended drawings.
To achieve these and other advantages and in accordance with the purpose of the invention, as embodied and broadly described, an embodiment consistent with the present
invention registers a template and a target image by defining a manifold landmark point in the template image, identifying a point in said target image corresponding to the defined manifold landmark point, and selecting a coordinate frame suitable for spherical geometries. According to an embodiment of the present invention, a large deformation transform is computed using the selected coordinate frame, a manifold landmark transformation operator, and at least one manifold landmark transformation boundary value. The large deformation transform relates
the manifold landmark point in the template image to the corresponding point in the target
image. The template and target image are registered using the large deformation transform in the selected coordinate frame. Another embodiment consistent with the present invention
registers a template image with a target image by selecting a coordinate frame suitable for spherical
geometries and registering the target and template image using a large deformation transform
in the selected coordinate frame. Both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
Brief Description of the Drawings
The accompanying drawings provide a further understanding of the invention. They illustrate embodiments of the invention and, together with the description, explain the principles of the invention.
Fig. 1 is a target and template image of an axial section of a human head with 0-dimensional manifolds;
Fig. 2 is schematic diagram illustrating an apparatus for registering images in accordance with the present invention;
Fig. 3 is a flow diagram illustrating the method of image registration according to the
present invention; Fig. 4 is a target and a template image with 1-dimensional manifolds;
Fig. 5 is a target and a template image with 2-dimensional manifolds;
Fig. 6 is a target and a template image with 3-dimensional manifolds;
Fig. 7 is a sequence of images illustrating registration of a template and target image; and
Fig. 8 is a flow diagram illustrating the computation of a fusing transform;
Fig. 9 is an oval template image which has landmark points selected and highlighted;
Fig. 10 is a deformed and distorted oval target image with corresponding landmark points highlighted and selected;
Fig. 11 is an image matching of the oval target and template images; and Fig. 12 is an image matching using diffeomorphism.
Detailed Description of the Invention
1. Method for Image Registration Using Both Landmark Based Knowledge and Image Data
A method and system is disclosed which registers images using both landmark based knowledge and image data. Reference will now be made in detail to the present preferred embodiment of the invention, examples of which are illustrated in the accompanying drawings. To illustrate the principles of this invention, Fig. 1 shows two axial views of a human head. In this example, template image 100 contains points 102, 104, and 114 identifying
structural points (0-dimensional landmark manifolds) of interest in the template image. Target image 120 contains points 108, 110, 116, corresponding respectively to template image points 102, 104, 114, via vectors 106, 112, 118, respectively.
Fig. 2 shows apparatus to carry out the preferred embodiment of this invention. A
medical imaging scanner 214 obtains the images shown in Fig. 1 and stores them in a computer
memory 206 which is connected to a computer central processing unit (CPU) 204. One of ordinary skill in the art will recognize that a parallel computer platform having multiple CPUs is also a suitable hardware platform for the present invention, including, but not limited to, massively parallel machines and workstations with multiple processors. Computer memory
206 can be directly connected to CPU 204, or this memory can be remotely connected through
a communications network.
Registering images 100, 120 according to the present invention, unifies registration based on landmark deformations and image data transformation using a coarse-to-fine approach. In this approach, the highest dimensional transformation required during registration is computed from the solution of a sequence of lower dimensional problems driven by successive refinements. The method is based on information either provided by an operator,
stored as defaults, or determined automatically about the various substructures of the template and the target, and varying degrees of knowledge about these substructures derived from anatomical imagery, acquired from modalities like CT, MRI, functional MRI, PET, ultrasound, SPECT, MEG, EEG, or cryosection.
Following this hierarchical approach, an operator, using pointing device 208, moves cursor 210 to select points 102, 104, 114 in Fig. 1, which are then displayed on a computer monitor 202 along with images 100, 120. Selected image points 102, 104, and 114 are 0-dimensional manifold landmarks.
Once the operator selects manifold landmark points 102, 104, and 114 in template image 100, the operator identifies the corresponding target image points 108, 110, 116. Once manifold landmark selection is complete, CPU 204 computes a first transform relating the manifold landmark points in template image 100 to their corresponding image points in target image 120. Next, a second transform is computed by CPU 204 by fusing the first
transform relating selected manifold landmark points with a distance measure relating all
image points in both template image 100 and target image 120. The operator can select an equation for the distance measure several ways including, but not limited to, selecting an
equation from a list using pointing device 208, entering into CPU 204 an equation using
keyboard 212, or reading a default equation from memory 206. Registration is completed by CPU 204 applying the second computed transform to all points in the template image 100. Although several of the registration steps are described as selections made by an operator, implementation of the present invention is not limited to manual selection. For example, the transforms, boundary values, region of interest, and distance measure can be defaults read from memory or determined automatically.
Fig. 3 illustrates the method of this invention in operation. First an operator defines a set of N manifold landmark points x_i, where i = 1, ..., N, represented by the variable M, in the template image (step 300). These points should correspond to points that are easy to identify in the target image.
Associated with each landmark point, x_i, in the template image, is a corresponding point y_i in the target image. The operator therefore next identifies the corresponding points, y_i, in the target image (step 310). The nature of this process means that the corresponding points can only be identified within some degree of accuracy. This mapping between the template and target points can be specified with a resolution having a Gaussian error of
variance σ2.
If a transformation operator has not been designated, the operator can choose a manifold landmark transformation operator, L, for this transformation computation. In this
embodiment, the Laplacian (∇² = ∂²/∂x₁² + ∂²/∂x₂² + ∂²/∂x₃²) is used for the operator L. Similarly, the
operator can also select boundary values for the calculation corresponding to assumed boundary conditions, if these values have not been automatically determined or stored as default values. Here, infinite boundary conditions are assumed, producing the following equation for K, where K(x,x_i) is the Green's function of a volume landmark transformation operator L² (assuming L is self-adjoint):
Figure imgf000015_0001
Using circulant boundary conditions instead of infinite boundary conditions provides an embodiment suitable for rapid computation. One of ordinary skill in the art will recognize that other operators can be used in place of the Laplacian operator, such operators include, but are not limited to, the biharmonic operator, linear elasticity operator, and other powers of these
operators.
In addition, the operator may select a region of interest in the target image. Restricting the computation to a relatively small region of interest reduces both computation and storage requirements because transformation is computed only over a subregion of interest. It is also
possible that in some applications the entire image is the desired region of interest. In other applications, there may be default regions of interest that are automatically identified.
The number of computations required is proportional to the number of points in the region of interest, so the computational savings equals the ratio of the total number of points in
the image to the number of points in the region of interest. The data storage savings for an
image with N points with a region of interest having M points is a factor of N/M. For example, for a volume image of 256 x 256 x 256 points with a region of interest of 128 x 128 x 128 points, the computation time and the data storage are reduced by a factor of eight. In addition, performing the computation only over the region of interest makes it necessary only to store a subregion, providing a data storage savings for the template image, the target image, and the transform values.
Following the identification of template manifold landmark points and corresponding points in the target image, as well as selection of the manifold transformation operator, the boundary values, and the region of interest, CPU 204 computes a transform that embodies the mapping relationship between these two sets of points (step 350). This transform can be estimated using Bayesian optimization, using the following equation:
û = arg min_u ∫ ‖Lu‖² + Σ_{i=1}^N |y_i − x_i − u(x_i)|² / σ_i²   (9)
the minimizer, û, having the form
û(x) = b + Ax + Σ_{i=1}^N β_i K(x, x_i)   (10)
where A is a 3 x 3 matrix, b = [b_1, b_2, b_3] is a 3 x 1 vector, and β_i = [β_i1, β_i2, β_i3] is a 3 x 1 weighting vector.
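Once A, b, and the weights β_i are known, the minimizer of equation (10) can be evaluated at any point by summing the affine part and the weighted Green's function terms. The Python fragment below is a minimal sketch of that evaluation; the isotropic Gaussian kernel merely stands in for the Green's function K, which in the patent depends on the chosen operator L.

```python
import numpy as np

def landmark_transform(x, A, b, betas, landmarks, sigma=10.0):
    """Evaluate u(x) = b + A x + sum_i beta_i K(x, x_i) at one point x.

    A         : 3 x 3 matrix, b : length-3 vector (affine part).
    betas     : (N, 3) array of weighting vectors beta_i.
    landmarks : (N, 3) array of template landmark points x_i.
    K is modeled here as an isotropic Gaussian kernel purely for
    illustration; the actual Green's function follows from the operator L.
    """
    diffs = landmarks - x                                        # (N, 3)
    k = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * sigma ** 2))   # K(x, x_i)
    return b + A @ x + k @ betas                                 # (3,)

# Tiny usage example with made-up values.
A = np.zeros((3, 3)); b = np.zeros(3)
landmarks = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
betas = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(landmark_transform(np.array([5.0, 0.0, 0.0]), A, b, betas, landmarks))
```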
The foregoing steps of the image registration method provide a coarse matching of a template and a target image. Fine matching of the images requires using the full image data and the landmark information and involves selecting a distance measure by solving a synthesis equation that simultaneously maps selected image landmarks in the template and target images and matches all image points within a region of interest. An example of this synthesis equation is:
û = arg min_u γ ∫ |T(x − u(x)) − S(x)|² dx + ∫ ‖Lu‖² + Σ_{i=1}^N |y_i − x_i − u(x_i)|² / σ_i²   (11)
where the displacement field u is constrained to have the form
u(x) = Σ_{k=0}^d μ_k φ_k(x) + Σ_{i=1}^N β_i K(x, x_i) + Ax + b   (12)
with the variables β_i, A, and b computed at step 350 in Fig. 3. The operator L in equation (11) may be the same operator used in equation (9), or alternatively, another operator may be used with a different set of boundary conditions. The basis functions φ_k are the eigen functions of operators such as the Laplacian Lu = ∇²u, the bi-harmonic Lu = ∇⁴u, linear elasticity Lu = α∇²u + (α + β)∇(∇·u), and powers of these operators Lᵖ for p ≥ 1.
One of ordinary skill in the art will recognize that there are many possible forms of the synthesis equation. For example, in the synthesis equation presented above, the distance measure in the first term measures the relative position of points in the target image with respect to points in the template image. Although this synthesis equation uses a quadratic distance measure, one of ordinary skill in the art will recognize that there are other suitable distance measures.
CPU 204 then computes a second or fusing transformation (Step 370) using the synthesis equation relating all points within a region of interest in the target image to all corresponding points in the template image. The synthesis equation is defined so that the resulting transform incorporates, or fuses, the mapping of manifold landmarks to corresponding target image points determined when calculating the first transform.
The computation using the synthesis equation is accomplished by solving a sequence of optimization problems from coarse to fine scale via estimation of the basis coefficients μ_k. This is analogous to multi-grid methods, but here the notion of refinement from coarse to fine is accomplished by increasing the number of basis components d. As the number of basis functions increases, smaller and smaller variabilities between the template and target are accommodated. The basis coefficients are determined by gradient descent, i.e.,
μ_k^(n+1) = μ_k^(n) − Δ ∂H(u^(n)) / ∂μ_k   (13)
where
∂H(u^(n))/∂μ_k = −γ ∫ (T(x − u^(n)(x)) − S(x)) ∇T(x − u^(n)(x)) · φ_k(x) dx − Σ_{i=1}^N ((y_i − x_i − u^(n)(x_i)) / σ_i²) · φ_k(x_i) + λ_k μ_k^(n)   (14)
and
u^(n)(x) = Σ_{k=0}^d μ_k^(n) φ_k(x) + Σ_{i=1}^N β_i K(x, x_i) + Ax + b,   (15)
where Δ is a fixed step size and λ_k are the eigenvalues of the eigenvectors φ_k.
The computation of the fusion transformation (step 370) using the synthesis equation is presented in the flow chart of Fig. 8. Equation (12) is used to initialize the value of the displacement field u(x) = u^(0)(x) (step 800). The basis coefficients μ_k = μ_k^(0) are set equal to zero and the variables β_i, A, and b are set equal to the solution of equation (11) (step 802). Equation (13) is then used to estimate the new values of the basis coefficients μ_k^(n+1) given the current estimate of the displacement field u^(n)(x) (step 804). Equation (15) is then used to compute the new estimate of the displacement field u^(n+1)(x) given the current estimate of the basis coefficients μ_k^(n) (step 806). The next part of the computation is to decide whether or not to increase the number d of basis functions φ_k used to represent the transformation (step 808). Increasing the number of basis functions allows more deformation. Normally, the algorithm is started with a small number of basis functions corresponding to low frequency eigen functions and then on defined iterations the number of frequencies is increased by one (step 810). This coarse-to-fine strategy matches larger structures before smaller structures. The preceding computations (steps 804-810) are repeated until the computation has converged or the maximum number of iterations is reached (step 812). The final displacement field is then used to transform the template image (step 814).
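The following Python sketch is a one-dimensional analogue of the coarse-to-fine loop of Fig. 8, assuming a cosine eigenbasis with periodic boundary conditions; it is illustrative only, and all names, step sizes, and the simplified gradient expression are assumptions rather than the patent's equations (13)-(15).

```python
import numpy as np

def coarse_to_fine_registration(template, target, num_basis_max=16,
                                iters_per_level=50, step=0.05):
    """Schematic 1-D coarse-to-fine descent on basis coefficients mu_k.

    The displacement is expanded in low-frequency cosine eigenfunctions;
    the number of basis functions d grows on defined iterations, matching
    larger structures before smaller ones.
    """
    n = template.size
    x = np.arange(n)
    mu = np.zeros(num_basis_max)                                  # coefficients start at zero
    lam = (2 * np.pi * np.arange(1, num_basis_max + 1) / n) ** 2  # eigenvalues lambda_k
    basis = np.array([np.cos(2 * np.pi * (k + 1) * x / n)
                      for k in range(num_basis_max)])             # phi_k(x)
    dTdx = np.gradient(template)
    for d in range(1, num_basis_max + 1):                         # refine coarse to fine
        for _ in range(iters_per_level):
            u = mu[:d] @ basis[:d]                                # current displacement field
            warped = np.interp(x - u, x, template)                # T(x - u(x))
            residual = warped - target
            dT_warped = np.interp(x - u, x, dTdx)
            # gradient of the image term w.r.t. mu_k plus a smoothness term
            grad = -(residual * dT_warped) @ basis[:d].T / n + lam[:d] * mu[:d]
            mu[:d] -= step * grad
    return mu @ basis                                             # final displacement field

# Usage: attempt to recover the displacement between two shifted bumps.
xs = np.linspace(0.0, 1.0, 256)
target = np.exp(-((xs - 0.50) ** 2) / 0.01)
template = np.exp(-((xs - 0.55) ** 2) / 0.01)
u = coarse_to_fine_registration(template, target)
```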
Once CPU 204 determines the transform from the synthesis equation fusing both landmark manifold information and image data, CPU 204 uses this transform to register the template image with the target image (step 380).
The spectrum of the second transformation, h, is highly concentrated around zero. This means that the spectrum mostly contains low frequency components. Using the sampling theorem, the transformation can be represented by a subsampled version provided that the sampling frequency is greater than the Nyquist frequency of the transformation. The computation may be accelerated by computing the transformation on a coarse grid and extending it to the full voxel lattice e.g., in the case of 3D images, by interpolation. The computational complexity of the algorithm is proportional to the dimension of the lattice on which the transformation is computed. Therefore, the computation acceleration equals the ratio of the full voxel lattice to the coarse computational lattice.
One of ordinary skill in the art will recognize that composing the landmark transformations followed by the elastic basis transformations, thereby applying the fusion methodology in sequence, provides an alternative valid approach to hierarchical synthesis of landmark and image information in the segmentation. Another way to increase the efficiency of the algorithm is to precompute the Green's functions and eigen functions of the operator L and store these precomputed values in a lookup table. These tables replace the computation of these functions at each iteration with a table lookup. This approach exploits the symmetry of Green's functions and eigen functions of the operator L so that very little computer memory is required. In the case of the Green's functions, the radial symmetry is exploited by precomputing the Green's function only along a radial direction.
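A minimal sketch of the lookup-table idea follows. The exponential radial profile is only a stand-in for the Green's function of whichever operator L is chosen; the table size and query points are arbitrary.

import numpy as np

# Precompute a radial profile once, then evaluate K(x, x_i) by table lookup.
def radial_profile(r):
    # stand-in radial Green's function; replace with the profile of the chosen operator L
    return np.exp(-r)

r_table = np.linspace(0.0, 2.0, 2048)            # radii covering the unit cube's diagonal
k_table = radial_profile(r_table)                # precomputed once, reused every iteration

def K_lookup(x, xi):
    # evaluate the (assumed radially symmetric) kernel at |x - xi| via interpolation
    r = np.linalg.norm(np.atleast_2d(x) - np.asarray(xi), axis=-1)
    return np.interp(r, r_table, k_table)

# usage: landmark contribution sum_i beta_i * K(x, x_i) at a handful of query points
pts = np.random.rand(5, 3)                       # query points in the unit cube
landmarks = np.random.rand(4, 3)
betas = np.random.rand(4)
u_landmark = sum(b * K_lookup(pts, xi) for b, xi in zip(betas, landmarks))
print(u_landmark.shape)                          # (5,)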
The method described for fusing landmark information with the image data transformation can be extended from landmarks that are individual points (0-dimensional manifolds) to manifolds of dimensions 1, 2 and 3 corresponding to curves (1-dimensional), surfaces (2-dimensional) and subvolumes (3-dimensional). For example, Fig. 4 shows a template image 400 of a section of a brain with 1- dimensional manifolds 402 and 404 corresponding to target image 406 1-dimensional manifolds 408 and 410 respectively. Fig. 5 shows a template image 500 of a section of a brain with 2-dimensional manifold 502 corresponding to target image 504 2-dimensional manifold 506. Fig. 6 shows a template image 600 of a section of a brain with 3-dimensional manifold 602 corresponding to target image 604 3-dimensional manifold 606.
As with the point landmarks, these higher dimensional manifolds condition the transformation; that is, we assume that the vector field mapping the manifolds in the template to the data is given. Under this assumption the manually-assisted deformation (step 350, Fig. 3) becomes the equality-constrained Bayesian optimization problem:
û(x) = arg min_u ∫_Ω |Lu(x)|² dx   (16)
subject to
u(x) = k(x), x ∈ ∪_{i=0}^{3} M(i).   (17)
If M(i) is a smooth manifold for i = 0, 1, 2, 3, the solution to this minimization is unique, satisfying L†Lû(x) = 0 for all template points in the selected manifold. This implies that the solution can be written in the form of a Fredholm integral equation:
û(x) = ∫_{∪M(i)} K(x, y) β(y) dS(y),  where K = GG†   (18)
and G the Green's function of L.
When the manifold is a sub-volume, M(3), dS is the Lebesgue measure. For 2-dimensional surfaces, dS is the surface measure on M(2). For 1-dimensional manifolds (curves), dS is the line measure on M(1), and for point landmarks, M(0), dS is the atomic measure. For point landmarks, the Fredholm integral equation degenerates into the summation given by equation (10). When the manifold of interest is a smooth, 2-dimensional surface, the solution satisfies the classical Dirichlet boundary value problem:
L†Lû(x) = 0, ∀x ∈ Ω \ M.   (19)
The Dirichlet problem is solved using the method of successive over relaxation as follows. If u^(k)(x) is the estimate of the deformation field at the k-th iteration, the estimate at the (k+1)-th iteration is given by the following update equation:
u^(k+1)(x) = u^(k)(x) + α L†L u^(k)(x), x ∈ Ω \ M;  u^(k+1)(x) = k(x), x ∈ M,   (20)
where α is the over relaxation factor.
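For illustration, a minimal 2-D sketch of the over-relaxation sweep of equation (20) follows, using the ordinary discrete Laplacian as a stand-in for L†L and holding the values on the manifold points fixed; the grid, manifold mask, and relaxation factor are assumptions of the sketch.

import numpy as np

# Relax Laplace(u) = 0 on a grid, with u fixed on a "manifold" mask (here a diagonal curve).
n = 64
u = np.zeros((n, n))
mask = np.zeros((n, n), dtype=bool)
for i in range(n):                               # manifold points and the values k(x) imposed there
    mask[i, i] = True
    u[i, i] = np.sin(2 * np.pi * i / n)

alpha = 1.8                                      # over relaxation factor (1 < alpha < 2)
for sweep in range(300):
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if mask[i, j]:
                continue                         # u is constrained on the manifold
            gauss_seidel = 0.25 * (u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1])
            u[i, j] += alpha * (gauss_seidel - u[i, j])
print("interior sample value:", u[10, 20])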
It is also possible to compute the transform (step 370) with rapid convergence by solving a series of linear minimization problems where the solution to the series of linear problems converges to the solution of the nonlinear problem. This avoids needing to solve the nonlinear minimization problem directly. Using a conjugate gradient method, the computation converges faster than a direct solution of the synthesis equation because the basis coefficients μk are updated with optimal step sizes.
Using the conjugate gradient, the displacement field is assumed to have the form
u(x) = Σ_{k=0}^{d} μ_k φ_k(x) + f(x)   (21)
where
f(x) = Σ_{i=1}^{N} β_i K(x, x_i) + Ax + b.   (22)
Begin by assuming that f(x) is fixed. This is generalized below. The eigen functions in the expansion are all real and follow the assumption that the {φ_k(x)} are ℝ³ valued. The minimization problem is solved by computing
μ_j^new = μ_j^old + Δ_j,  j = 0, …, d   (23)
to update the basis coefficients in equation (21), where μ_j = 0, j = 0, …, d initially. Δ_j is computed using the equation
Δ_j = ( γ ∫_Ω (T(x − u(x)) − S(x)) h_j(x) dx − γ Σ_{k=0}^{j−1} Δ_k ∫_Ω θ_kj(x) dx − λ_j² μ_j^old ) / ( γ ∫_Ω θ_jj(x) dx + λ_j² )   (24)
where h_l(x) = ∇T|_{x−u(x)} · φ_l(x), and where θ_lj(x) = h_l(x) h_j(x). The notation f·g is the inner-product, i.e., f·g = Σ_{i=1}^{3} f_i g_i for f, g ∈ ℝ³.
Similarly, since u(x) is written in the series expansion given in equations (21) and (22), the identical formulation for updating β_i arises. Accordingly, the fusion achieved by the present invention results. Computation of equation (23) repeats until all Δ_j fall below a predetermined threshold, solving for each Δ_j in sequence of increasing j, and Δ_j is computed using the values of Δ_k for 0 ≤ k < j. A further improvement over prior art image registration methods is achieved by computing the required transforms using fast Fourier transforms (FFT). Implementing an FFT based computation for image registration using a synthesis equation, as required at step 370 of Fig. 3, provides computational efficiency. However, to exploit the known computational efficiencies of FFT transforms, the solution of the synthesis equation must be recast to transform the inner products required by iterative algorithms into shift invariant convolutions. To make the inner-products required by the iterative algorithms into shift invariant convolutions, differential and difference operators are defined on a periodic version of the unit cube and the discrete lattice cube. Thus, the operators are made cyclo-stationary, implying their eigen functions are always of the form of complex exponentials on these cubes, having the value:
Ψ_k^(r)(x) = [c_{1k}^(r), c_{2k}^(r), c_{3k}^(r)]^T e^{j⟨ω_k, x⟩},   (25)
r = 1, 2, 3, with x = (x_1, x_2, x_3) ∈ [0, 1]³,
ω_{ki} = 2πk_i, i = 1, 2, 3, and the Fourier basis for periodic functions on [0, 1]³ takes the form e^{j⟨ω_k, x⟩} = e^{j(ω_{k1}x_1 + ω_{k2}x_2 + ω_{k3}x_3)}, ω_k = (ω_{k1}, ω_{k2}, ω_{k3}). On the discrete N³ periodic lattice, ω_{ki} = 2πk_i/N, x ∈ {0, 1, …, N − 1}³.
For real expansions, the eigen vectors become φ_k(x) = Ψ_k(x) + Ψ_k*(x), k ∈ {0, 1, …, N/2 − 1}³, and the real expansion in equation (21) becomes:
u(x) = Σ_{k=0}^{d} μ_k (Ψ_k(x) + Ψ_k*(x))   (26)
where * means complex conjugate and 0 ≤ d ≤ (N/2)³.
This reformulation supports an efficient implementation of the image registration process using the FFT. Specifically, if step 370 of Fig. 3, computing the registration transform fusing landmark and image data, is implemented using the conjugate gradient method, the computation will involve a series of inner products. Using the FFT exploits the structure of the eigen functions and the computational efficiency of the FFT to compute these inner-products. For example, one form of a synthesis equation for executing Step 370 of Fig. 3 will include the following three terms:
Term 1: ∫_Ω (T(x − u(x)) − S(x)) h_j(x) dx
Term 2: ∫_Ω θ_kj(x) dx
Term 3: u(x) = Σ_{k=0}^{d} μ_k φ_k(x)
Each of these terms must be recast in a suitable form for FFT computation. One example of a proper reformulation for each of these terms is:
Term 1:
∫_Ω (T(x − u(x)) − S(x)) ∇T(x − u(x)) · (Ψ_j(x) + Ψ_j*(x)) dx   (27)
= 2Re ∫_Ω (T(x − u(x)) − S(x)) Σ_{r=1}^{3} (∇T(x − u(x)) · c_j^(r)) e^{j⟨ω_j, x⟩} dx
where c_k^(r) = [c_{1k}^(r), c_{2k}^(r), c_{3k}^(r)]^T. This equation is computed for all k by a Fourier transformation of the function
(T(x − u(x)) − S(x)) Σ_{r=1}^{3} ∇T(x − u(x)) · c_k^(r)   (28)
and hence can be computed efficiently using a 3-D FFT.
Term 2:
∫_Ω θ_kj(x) dx = 2Re Σ_{r=1}^{3} Σ_{s=1}^{3} (c_k^(r))^T [ ∫_Ω ∇T(∇T)^T e^{j⟨ω_k + ω_j, x⟩} dx ] c_j^(s)   (29)
The integral in the above summation for all k can be computed by Fourier transforming the elements of the 3 × 3 matrix:
∇T(∇T)^T   (30)
evaluated at ω_k + ω_j. Because this matrix has diagonal symmetry, the nine FFTs in this reformulation of Term 2 can be computed efficiently using six three-dimensional FFTs evaluated at ω_k + ω_j.
Term 3:
Using the exact form for the eigen functions we can rewrite the above equation as
u(x) = 2Re Σ_{k=0}^{d} Σ_{r=1}^{3} μ_{kr} c_k^(r) e^{j⟨ω_k, x⟩}.   (31)
This summation is precisely the inverse Fourier transform of the functions Σ_{r=1}^{3} μ_{kr} c_{ik}^(r) for i = 1, 2, 3,
and hence can be computed efficiently by using a 3-D FFT.
One of ordinary skill in the art will recognize that restructuring the computation of registration transforms using FFTs will improve the performance of any image registration method having terms similar to those resulting from a synthesis equation fusing landmark and image data. Improvement results from the fact that many computer platforms compute FFTs efficiently; accordingly, reformulating the registration process as an FFT computation makes the required computations feasible.
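Purely by way of illustration, the sketch below shows how a single 3-D FFT evaluates Term-1-style inner products against all complex exponentials e^{j⟨ω_k, x⟩} at once. The random volumes and the componentwise handling of ∇T are assumptions of the sketch, not the patent's data or exact arrangement.

import numpy as np

# Toy 3-D volumes standing in for the warped template T(x - u(x)) and the target S(x).
N = 32
Tw = np.random.rand(N, N, N)
S = np.random.rand(N, N, N)
gradT = np.stack(np.gradient(Tw))                # (3, N, N, N): components of grad T at x - u(x)

# Direct evaluation of the inner products for a single frequency k (O(N^3) per k):
k = (3, 5, 1)
x = np.indices((N, N, N)) / N                    # lattice points in [0, 1)^3
phase = np.exp(1j * 2 * np.pi * (k[0] * x[0] + k[1] * x[1] + k[2] * x[2]))
direct = [np.sum((Tw - S) * gradT[r] * phase) for r in range(3)]

# FFT evaluation of the same inner products for *all* frequencies at once (O(N^3 log N) total):
all_k = [np.fft.fftn((Tw - S) * gradT[r]) for r in range(3)]
fft_val = [all_k[r][k] for r in range(3)]

# The two agree up to the FFT sign convention (numpy uses exp(-j<w,x>), so conjugate):
print(np.allclose(direct, [np.conj(v) for v in fft_val]))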
A distance function used to measure the disparity between images is the Gaussian squared error distance ∫ |T(x − u(x)) − S(x)|² dx. There are many other forms of an appropriate distance measure. More generally, distance functions, such as the correlation distance or the Kullback-Leibler distance, can be written in the form ∫ D(T(x − u(x)), S(x)) dx.
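As a small illustrative sketch, the pointwise integrands D and their derivatives D' with respect to the first argument can be written as plug-in functions. The unnormalized negative-correlation form below is an assumption standing in for the correlation distance; the fully normalized version involves global image statistics and is not shown.

import numpy as np

def D_ssd(t, s):                                 # Gaussian squared error integrand
    return (t - s) ** 2

def D_ssd_prime(t, s):                           # derivative with respect to the first argument
    return 2.0 * (t - s)

def D_corr(t, s):                                # unnormalized negative correlation (illustrative)
    return -t * s

def D_corr_prime(t, s):
    return -s

# usage inside the gradient: integrate D'(T(x - u(x)), S(x)) times gradT . phi_k over the volume
Tw = np.random.rand(8, 8, 8)                     # stand-in for warped template samples
S = np.random.rand(8, 8, 8)                      # stand-in for target samples
print(np.mean(D_ssd_prime(Tw, S)), np.mean(D_corr_prime(Tw, S)))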
An efficient convolution implementation can be derived using the FFT for arbitrary distance functions. Computing the fusing transform using the image data follows the equation:
û = arg min_u γ ∫_Ω D(T(x − u(x)), S(x)) dx + ∫_Ω |Lu(x)|² dx   (32)
where D(.,.) is a distance function relating points in the template and target images. The displacement field is assumed to have the form:
u(x) = Σ_{k=0}^{d} μ_k φ_k(x) + f(x)   (33)
where
f(x) = Σ_{i=1}^{N} β_i K(x, x_i) + Ax + b   (34)
is fixed. The basis coefficients {μ_k} are determined by gradient descent, i.e.,
μ_k^(n+1) = μ_k^(n) − Δ ∂H(u^(n))/∂μ_k   (35)
where the gradient is computed using the chain rule and is given by the equation
∂H(u^(n))/∂μ_k = ∫_Ω D′(T(x − u^(n)(x)), S(x)) ∇T(x − u^(n)(x)) · φ_k(x) dx + 2 λ_k² μ_k^(n)   (36)
where D′(·,·) is the derivative with respect to the first argument. The most computationally intensive aspect of the algorithm is the computation of the term
∫_Ω D′(T(x − u^(n)(x)), S(x)) ∇T(x − u^(n)(x)) · φ_k(x) dx.
Using the structure of the eigen functions and the computational efficiency of the FFT to compute these inner-products, the above term can be written as
2Re ∫_Ω D′(T(x − u^(n)(x)), S(x)) ( Σ_{r=1}^{3} ∇T(x − u^(n)(x)) · c_k^(r) ) e^{j⟨ω_k, x⟩} dx
where c_k^(r) = [c_{1k}^(r), c_{2k}^(r), c_{3k}^(r)]^T. This equation is computed for all k by a Fourier transformation of the function
D′(T(x − u^(n)(x)), S(x)) Σ_{r=1}^{3} ∇T(x − u^(n)(x)) · c_k^(r)
and hence can be computed efficiently using a 3-D FFT. The following example illustrates the computational efficiencies achieved using FFTs for image registration instead of direct computation of inner-products. Assuming that a target image is discretized on a lattice having N³ points, each of the inner-products in the algorithm, if computed directly, would have a computational complexity of the order (N³)². Because the inner-products are computationally intensive, the overall complexity of image registration is also (N³)². In contrast, each of the FFTs proposed has a computational complexity on the order of N³ log₂ N³. The speed up is given by the ratio N⁶/(N³ log₂ N³) = N³/(3 log₂ N). Thus the speed up is 64 times for a 16 × 16 × 16 volume and greater than 3.2 × 10⁴ for a 256 × 256 × 256 volume. A further factor of two savings in computation time can be gained by exploiting the fact that all of the FFTs are real. Hence all of the FFTs can be computed with corresponding complex FFTs of half the number of points. For a development of the mathematics of FFTs see A.V. Oppenheim and R.W. Schafer, Digital Signal Processing, Prentice-Hall, New Jersey, 1975 (hereinafter referred to as Oppenheim). Alternative embodiments of the registration method described can be achieved by changing the boundary conditions of the operator. In the disclosed embodiment, the minimization problem is formulated with cyclic boundary conditions. One of ordinary skill in the art will recognize that alternative boundary conditions, such as the Dirichlet, Neumann, or mixed Dirichlet and Neumann boundary conditions, are also suitable. The following equation is used in an embodiment of the present invention using one set of mixed Dirichlet and Neumann boundary conditions:
u_i(x | x_j = k) = 0 for i, j = 1, 2, 3; i ≠ j; k = 0, 1; and ∂u_i/∂x_i (x | x_i = k) = 0 for i = 1, 2, 3; k = 0, 1,   (37)
where the notation (x | x_i = k) means x is in the template image such that x_i = k. In this case, the eigen functions would be of the form:
φ_k^(r)(x) = [ c_{1k}^(r) cos ω_{k1}x_1 sin ω_{k2}x_2 sin ω_{k3}x_3, c_{2k}^(r) sin ω_{k1}x_1 cos ω_{k2}x_2 sin ω_{k3}x_3, c_{3k}^(r) sin ω_{k1}x_1 sin ω_{k2}x_2 cos ω_{k3}x_3 ]^T for r = 1, 2, 3.   (38)
Modifying boundary conditions requires modifying the butterflies of the FFT from complex exponentials to appropriate sines and cosines.
In Fig. 7, four images, template image 700, image 704, image 706, and target image 708, illustrate the sequence of registering a template image and a target image. Template image 700 has 0-dimensional landmark manifolds 702. Applying the landmark manifold transform computed at step 350 in Fig. 3 to image 700 produces image 704. Applying a second transform computed using the synthesis equation combining landmark manifolds and image data to image 700 produces image 706. Image 706 is the final result of registering template image 700 with target image 708. Landmark manifold 710 in image 708 corresponds to landmark manifold 702 in template image 700.
Turning now to techniques for registering images which may possess large deformation characteristics using diffeomorphisms: large deformation transform functions and transformations are capable of matching images where the changes from one image to the other are greater than small, linear, or affine. Use of large deformation transforms provides for the automatic calculation of tangents, curvature, surface areas, and geometric properties of the imagery.
2. Methods for Large Deformation Landmark Based and Image Based Transformations
An embodiment consistent with the present invention maps sets of landmarks in imagery {x_i, i = 1, 2, …, N} ⊂ Ω into target landmarks {y_i, i = 1, …, N}, and/or imagery I_0 into target I_1, both with and without landmarks, for example when there is a well-defined distance function D(u(T)) expressing the distance between the landmarks and/or imagery. The large deformation maps h: Ω → Ω are constructed by introducing the time variable,
h : (x, t) = (x_1, x_2, x_3, t) ∈ Ω × [0, T] → h(x, t) = (x_1 − u_1(x, t), x_2 − u_2(x, t), x_3 − u_3(x, t)) ∈ Ω.
The large deformation maps are constrained to be the solution h(x, T) = x - u(x, T) where u(x, T) is generated as the solution of the ordinary differential equation
u(x, T) = ∫_0^T (I − ∇u(x, t)) v(x, t) dt,  where v(x, t) = Σ_{k=0}^{∞} v_k φ_k(x, t), (x, t) ∈ Ω × [0, T],   (39)
assuming that the {φ_k} form a complete orthonormal base.
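A minimal sketch of generating u(x, T) from a velocity field via equation (39) by forward Euler integration follows, on a 2-D grid with a synthetic velocity field; the field, grid resolution, and time step are illustrative assumptions.

import numpy as np

# Forward-integrate du/dt = (I - grad u) v on a 2-D grid, a low-dimensional stand-in for eq. (39).
N, steps, dt = 64, 20, 1.0 / 20
coords = np.stack(np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N), indexing="ij"))
u = np.zeros((2, N, N))                          # displacement components

def velocity(t):
    # synthetic smooth, time-varying velocity field (illustrative only)
    return 0.05 * np.cos(np.pi * t) * np.stack([np.sin(np.pi * coords[0]),
                                                np.sin(np.pi * coords[1])])

for s in range(steps):
    v = velocity(s * dt)
    grad_u = np.stack([np.stack(np.gradient(u[i], 1.0 / (N - 1))) for i in range(2)])
    du = np.stack([v[i] - grad_u[i, 0] * v[0] - grad_u[i, 1] * v[1] for i in range(2)])
    u = u + dt * du                              # u(x, t+dt) = u(x, t) + dt (I - grad u) v

h = coords - u                                   # the large deformation map h(x, T) = x - u(x, T)
print("max |u|:", np.abs(u).max())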
A diffeomorphism for the landmark and image matching problem is given by the mapping of the imagery given by h(x, T) = x − û(x, T) satisfying the Ordinary Differential Equation (O.D.E.)
û(x, T) = ∫_0^T (I − ∇û(x, t)) v̂(x, t) dt   (40)
where v̂(x, t) = arg min_v ∫_0^T ∫_Ω ||Lv(x, t)||² dx dt + D(u(T)).   (41)
L is preferably a linear differential operator with the φ_k forming a complete orthonormal base as the eigenfunctions, Lφ_k = λ_k φ_k.
2.1 Large Deformation Landmark Matching
In order to register images with large deformation using a landmark matching technique, N landmarks are identified in the two anatomies, template and target, {x_i, y_i, i = 1, 2, …, N}. The landmarks are identified with varying degrees of accuracy {Σ_i, i = 1, …, N}, the Σ_i being 3 × 3 covariance matrices. The distance between the target and template imagery landmarks is preferably defined as
D_1(u(T)) = Σ_{i=1}^{N} (y_i − x_i − u(x_i, T))^T Σ_i⁻¹ (y_i − x_i − u(x_i, T)).   (42)
A preferable diffeomorphism is the minimizer of Eqns. 40, 41 with D(u(T)) the landmark distance:
û(x, T) = ∫_0^T (I − ∇û(x, t)) v̂(x, t) dt   (43)
where v̂(x, t) = arg min_v ∫_0^T ∫_Ω ||Lv(x, t)||² dx dt + D_1(u(T)).   (44)
A method for registering images consistent with the present invention preferably includes the following steps: STEP 0: Define a set of landmarks in the template which can be easily identified in the target, {x_i : x_i ∈ Ω, i = 1, 2, …, N}, with varying degrees of accuracy in the target as {y_i, i = 1, …, N} with associated error covariance {Σ_i, i = 1, …, N}, and initialize for n = 0, v_k^(0) = 0.
STEP 1 : Calculate velocity and deformation fields:
v^(n)(x, t) = Σ_{k=0}^{d} v_k^(n) φ_k(x, t),  u^(n)(x, T) = ∫_0^T (I − ∇u^(n)(x, t)) v^(n)(x, t) dt.   (45)
STEP 2: Solve via a sequence of optimization problems from coarse to fine scale via estimation of the basis coefficients {v_k}, analogous to multi-grid methods with the notion of refinement from coarse to fine accomplished by increasing the number of basis components. For each v_k,
v_k^(n+1) = v_k^(n) − Δ ∂H(v^(n))/∂v_k.   (46)
STEP 3: Set n ← n + 1, and return to Step 1.
2.2 Large Deformation Image Matching
Turning now to the technique where a target and template image with large deformation are registered using the image matching technique. The distance between the target and template image is chosen in accordance with Eqn. 47. A diffeomorphic map is computed using an image transformation operator and image transformation boundary values relating the template image to the target image. Subsequently the template image is registered with the target image using the diffeomorphic map.
Given two images I_0, I_1, choose the distance
D_2(u(T)) = γ ∫_{[0,1]³} |I_0(x − u(x, T)) − I_1(x)|² dx.   (47)
The large deformation image distance driven map is constrained to be the solution h(x, T) = x − û(x, T) where
û(x, T) = ∫_0^T (I − ∇û(x, t)) v̂(x, t) dt, and   (48)
v̂(x, t) = arg min_v ∫_0^T ∫_Ω ||Lv(x, t)||² dx dt + D_2(u(T)).   (49)
A method for registering images consistent with the present invention preferably includes the following steps:
STEP 0: Measure two images I_0, I_1 defined on the unit cube Ω = [0,1]³, initialize parameters for n = 0: v_k^(0) = 0, k = 0, 1, …, and define the distance measure D(u(T)), Eqn. 47.
STEP 1: Calculate velocity and deformation fields from v^(n):
v^(n)(x, t) = Σ_{k=0}^{d} v_k^(n) φ_k(x, t),  u^(n)(x, T) = ∫_0^T (I − ∇u^(n)(x, t)) v^(n)(x, t) dt.   (50)
STEP 2: Solve optimization via a sequence of optimization problems from coarse to fine scale via re-estimation of the basis coefficients {v_k}, analogous to multi-grid methods with the notion of refinement from coarse to fine accomplished by increasing the number of basis components. For each v_k,
v_k^(n+1) = v_k^(n) − Δ ∂H(v^(n))/∂v_k   (51)
where
∂H(v^(n))/∂v_k = − ∫_Ω (I_0(x − u^(n)(x, T)) − I_1(x)) ∇I_0(x − u^(n)(x, T)) · ∂u^(n)(x, T)/∂v_k dx.   (52)
STEP 3: Set n ← n + 1, and return to Step 1.
The velocity field can be constructed with various boundary conditions, for example v(x, t) = 0, x ∈ ∂Ω and t ∈ [0, T], u(x, 0) = v(x, 0) = 0. The differential operator L can be chosen to be any in a class of linear differential operators; we have used operators of the form (−a∇² − b∇(∇·) + cI)^p, p ≥ 1. The operators are 3 × 3 matrices
∇u = [ ∂u_1/∂x_1  ∂u_1/∂x_2  ∂u_1/∂x_3 ; ∂u_2/∂x_1  ∂u_2/∂x_2  ∂u_2/∂x_3 ; ∂u_3/∂x_1  ∂u_3/∂x_2  ∂u_3/∂x_3 ]   (53)
L = −a∇² − b∇(∇·) + cI.   (54)
2.3 Small deformation solution
The large deformation computer algorithms can be related to the small deformation approach described in the '212 patent by choosing v = u, so that T = δ small, then approximating I − ∇u(·,σ) ≈ I for σ ∈ [0, δ), then û(x, δ) = v̂(x)δ and defining u(x) ≡ u(x, δ), so that Eqn. 55 reduces to the small deformation problems described in the '212 patent:
û(x) = arg min_u ∫_Ω ||Lu(x)||² dx + D(u).   (55)
3. Composing Large Deformation Transformations Unifying Landmark and Image Matching
The approach for generating a hierarchical transformation combining information is to compose the large deformation transformations, which are diffeomorphisms and can therefore be composed, h = h_n ∘ … ∘ h_2 ∘ h_1. Various combinations of transformations may be chosen, including the affine motions, rigid motions generated from subgroups of the generalized linear group, large deformation landmark transformations which are diffeomorphisms, or the high dimensional large deformation image matching transformation (the dimensions of the transformations of the vector fields are listed in increasing order). Since these are all diffeomorphisms, they can be composed.
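On a sampled grid the composition h = h_2 ∘ h_1 can be sketched by resampling, assuming each map is stored as its displacement field; scipy's map_coordinates supplies the interpolation and the example fields are synthetic, not those produced by the transformations above.

import numpy as np
from scipy.ndimage import map_coordinates

# Compose h = h2 o h1, each stored as a displacement field u so that h(x) = x - u(x).
N = 64
grid = np.stack(np.meshgrid(np.arange(N, dtype=float), np.arange(N, dtype=float), indexing="ij"))
u1 = 2.0 * np.sin(2 * np.pi * grid / N)          # synthetic displacement fields (in voxels)
u2 = 1.0 * np.cos(2 * np.pi * grid / N)

h1 = grid - u1                                   # h1(x) = x - u1(x)
# h(x) = h2(h1(x)) = h1(x) - u2(h1(x)): resample u2 at the points h1(x)
u2_at_h1 = np.stack([map_coordinates(u2[i], h1, order=1, mode="nearest") for i in range(2)])
h = h1 - u2_at_h1
u_composed = grid - h                            # displacement field of the composed map
print("max composed displacement (voxels):", np.abs(u_composed).max())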
4. Fast Method for Landmark Deformations Given Small Numbers of Landmarks
For small numbers of landmarks, we re-parameterize the problem using the Lagrangian frame, discretizing the optimization over space-time Ω × T into N functions of time T. This reduces the complexity by an order of magnitude given by the size of the imaging lattice |Ω|.
For this, define the Lagrangian positions of the N landmarks x_i, i = 1, …, N as they flow through time, φ_i(·), i = 1, …, N, with the associated 3N-vector
Φ(t) = (φ(x_1, t), φ(x_2, t), …, φ(x_N, t))^T.   (56)
The particle flows Φ(t) are defined by the velocities v(·) according to the fundamental O.D.E.
dφ(t)/dt = v(φ(t), t).   (57)
It is helpful to define the 3N × 3N covariance matrix K(Φ(t)):
K(Φ(t)) = ( K(φ(x_i, t), φ(x_j, t)) )_{i,j = 1, …, N},   (58)
a 3N × 3N matrix built from 3 × 3 blocks of the kernel K.
The inverse K(Φ(t))⁻¹ is an N × N matrix with 3 × 3 block entries (K(Φ(t))⁻¹)_{ij}, i, j = 1, …, N.
For the landmark matching problem, we are given N landmarks identified in the two anatomies, template and target, {x_i, y_i, i = 1, 2, …, N}, identified with varying degrees of accuracy {Σ_i, i = 1, …, N}, the Σ_i being 3 × 3 covariance matrices.
The squared error distance between the target and template imagery defined in the Lagrangian trajectories of the landmarks becomes
<v
D,(Φ(T)) - ∑ 0,- - φ(7rx,.))∑1-l(yι. - φ(7rx,.)). (59) ι = I
Then a preferable diffeomorphism is the minimizer of Eqns. 40, 41 with D_1(Φ(T)) the landmark distance:
v̂(·) = arg min_v ∫_0^T ∫_Ω ||Lv(x, t)||² dx dt + D_1(Φ(T)),   (60)
where dφ(t)/dt = v(φ(t), t).   (61)
A fast method for small numbers of landmark matching points exploits the fact that when there are far fewer landmarks N than points in the image, x ∈ Ω, there is the following equivalent optimization problem in the N Lagrangian velocity fields φ̇(x_i, ·), i = 1, …, N, which is more computationally efficient than the optimization of the Eulerian velocity v(x, ·), x ∈ Ω (|Ω| ≫ N). Then, the equivalent optimization problem becomes
v(x, t) = Σ_{i=1}^{N} K(φ(x_i, t), x) Σ_{j=1}^{N} (K(Φ(t))⁻¹)_{ij} φ̇(x_j, t),   (62)
where the Lagrangian velocities φ̇(x_i, ·), i = 1, …, N, solve
φ̇ = arg min_{φ̇} ∫_0^T Σ_{i,j=1}^{N} φ̇(x_i, t)^T (K(Φ(t))⁻¹)_{ij} φ̇(x_j, t) dt + D_1(Φ(T)),   (63)
and φ(x, T) = ∫_0^T v(φ(x, σ), σ) dσ + x.   (64)
This reduces to a finite dimensional problem by defining the flows on the finite grid of times, assuming step-size δ with Lagrangian velocities piecewise constant within the quantized time intervals:
φ̇(x_i, t) = (φ(x_i, kδ) − φ(x_i, (k − 1)δ))/δ,  t ∈ [(k − 1)δ, kδ),  k = 1, …, T/δ.   (65)
Then the finite dimensional minimization problem becomes, minimizing over the positions φ(x_i, kδ),
Σ_{k=1}^{T/δ} Σ_{i,j=1}^{N} δ [(φ(x_i, kδ) − φ(x_i, (k−1)δ))/δ]^T (K(Φ(kδ))⁻¹)_{ij} [(φ(x_j, kδ) − φ(x_j, (k−1)δ))/δ]
+ Σ_{i=1}^{N} (φ(x_i, T) − y_i)^T Σ_i⁻¹ (φ(x_i, T) − y_i)   (66)
subject to: φ(x_i, 0) = x_i, i = 1, …, N.
The Method:
In order to properly register the template image with the target image when a small number of landmarks have been identified, the method of the present embodiment utilizes the Lagrangian positions. The method for registering images consistent with the present invention includes the following steps: STEP 0: Define a set of landmarks in the template which can be easily identified in the target, {x_i : x_i ∈ Ω, i = 1, 2, …, N}, with varying degrees of accuracy in the target as {y_i, i = 1, …, N} with associated error covariance {Σ_i, i = 1, …, N}, and initialize for n = 0, φ^(0)(x_i, ·) = x_i. STEP 1: Calculate velocity and deformation fields:
v^(n)(x, t) = Σ_{i=1}^{N} K(φ^(n)(x_i, t), x) Σ_{j=1}^{N} (K(Φ^(n)(t))⁻¹)_{ij} φ̇^(n)(x_j, t).
STEP 2: Solve via estimation of the Lagrangian positions φ(x_i, kδ), k = 1, …, K.
For each φ(x_i, kδ),
φ^(n+1)(x_i, kδ) = φ^(n)(x_i, kδ) − Δ ∂H(Φ^(n))/∂φ(x_i, kδ),   (67)
where ∂H(Φ^(n))/∂φ(x_i, kδ) is the gradient of the discretized energy of equation (66) with respect to the Lagrangian position φ(x_i, kδ); it involves the difference quotients (φ(x_i, kδ) − φ(x_i, (k − 1)δ))/δ and (φ(x_i, (k + 1)δ) − φ(x_i, kδ))/δ, the inverse covariance matrices K(Φ(kδ))⁻¹, and the derivatives ∂K(Φ)/∂φ(x_i, kδ).
STEP 3: Set n ← n + 1, and return to Step 1.
STEP 4: After stopping, compute the optimal velocity field using equation 62 and the transform using equation 64.
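As an illustration only, the reduced, finite-dimensional objective of equation (66) can be evaluated as sketched below for a handful of landmarks. The Gaussian kernel is a stand-in for the Green's-function-derived covariance K, and the landmarks, trajectories, and covariances are synthetic assumptions.

import numpy as np

# Discretized Lagrangian landmark energy: kinetic term through K(Phi(t))^{-1} plus endpoint mismatch.
rng = np.random.default_rng(0)
N, K_steps, delta = 4, 10, 0.1
x = rng.random((N, 3))                            # template landmarks
y = x + 0.05 * rng.standard_normal((N, 3))        # target landmarks
Sigma_inv = np.eye(3) / 0.01                      # assumed isotropic landmark error covariance

def kernel(p, q, sigma=0.2):
    # stand-in for the 3x3-block covariance K(p, q): a scalar Gaussian acting as K * identity
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# a candidate set of trajectories phi(x_i, k*delta): straight lines from x_i toward y_i
phi = np.stack([x + (k / K_steps) * (y - x) for k in range(K_steps + 1)])   # (K+1, N, 3)

energy = 0.0
for k in range(1, K_steps + 1):
    vel = (phi[k] - phi[k - 1]) / delta           # piecewise constant Lagrangian velocities
    K_inv = np.linalg.inv(kernel(phi[k], phi[k]) + 1e-6 * np.eye(N))
    # sum_ij vel_i^T (K^{-1})_ij vel_j, scalar kernel blocks acting as (K^{-1})_ij * I
    energy += delta * np.einsum("ij,id,jd->", K_inv, vel, vel)
endpoint = phi[-1] - y
energy += np.einsum("id,de,ie->", endpoint, Sigma_inv, endpoint)
print("discretized energy:", energy)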
5. Fast Greedy Implementation of Large Deformation Image Matching
For the image matching, discretize space-time Ω × T into a sequence of optimizations indexed in time, solving for the locally optimal transformation at each time and then forward integrating the solution. This reduces the dimension of the optimization and allows for the use of Fourier transforms.
The transformation h(·, t) : Ω → Ω where h(x, t) = x − u(x, t), and the transformation and velocity fields are related via the O.D.E.
∂u(x, t)/∂t = v(x, t) − ∇u(x, t) v(x, t),  t ∈ [0, T].
Preferably the deformation fields are generated from the velocity fields assumed to be piecewise constant over quantized time increments, v(x, t) = v(x, t_i), t ∈ [iδ, (i + 1)δ), i = 1, …, I = T/δ, δ the quantized time increment. Then the sequence of deformations u(x, t_i), i = 1, …, I is given by
u(x, t_{i+1}) = u(x, t_i) + [ ∫_{t_i}^{t_{i+1}} (I − ∇u(x, σ)) dσ ] v(x, t_i),  i = 1, …, I.   (68)
For δ small, approximate ∇u(x, σ) ≈ ∇u(x, t_i), σ ∈ [t_i, t_{i+1}]; then the global optimization is solved via a sequence of locally optimal solutions according to, for t_i, i = 1, …, I,
v̂(x, t_i) = arg min_v ∫_Ω ||Lv(x, t_i)||² dx + D(u(·, t_{i+1})),   (69)
where u(x, t_{i+1}) = u(x, t_i) + δ(I − ∇u(x, t_i)) v(x, t_i).
The sequence of locally optimal velocity fields v̂(x, t_i), i = 1, …, I satisfies the O.D.E.
L†L v̂(x, t_{i+1}) = b(x, û(x, t_{i+1}))  where  u(x, t_{i+1}) = û(x, t_i) + δ(I − ∇û(x, t_i)) v̂(x, t_i).   (70)
Examples of boundary conditions include v(x, t) = 0, x ∈ ∂Ω and t ∈ [0, T], and L the linear differential operator L = −a∇² − b∇(∇·) + cI. The body force b(x − u(x, t)) is given by the variation of the distance D(u) with respect to the field at time t. The PDE is solved numerically (see G. E. Christensen, R. D. Rabbitt, and M. I. Miller, "Deformable templates using large deformation kinematics," IEEE Transactions on Image Processing, 5(10):1435-1447, October 1996, for details (hereinafter referred to as Christensen)). To solve for the fields u(·, t_i) at each t_i, expand the velocity fields in the basis {φ_k}: v(·, t_i) = Σ_k v_k(t_i) φ_k(·). To make the inner-products required by the iterative algorithms into shift invariant convolutions, force the differential and difference operators L to be defined on a periodic version of the unit cube Ω = [0,1]³ and the discrete lattice cube {0, 1, …, N − 1}³. f·g denotes the inner-product f·g = Σ_{i=1}^{3} f_i g_i for f, g ∈ ℝ³. The operators are cyclo-stationary in space, implying their eigen functions are of the form of complex exponentials on these cubes:
Ψ_k^(d)(x) = c_k^(d) e^{j⟨ω_k, x⟩},  d = 1, 2, 3, with x = (x_1, x_2, x_3) ∈ [0, 1]³,
ω_k = (ω_{k1}, ω_{k2}, ω_{k3}), ω_{ki} = 2πk_i, i = 1, 2, 3, and the Fourier basis for periodic functions on [0, 1]³ takes the form e^{j⟨ω_k, x⟩} = e^{j(ω_{k1}x_1 + ω_{k2}x_2 + ω_{k3}x_3)}. On the discrete N³ periodic lattice, ω_k = (2πk_1/N, 2πk_2/N, 2πk_3/N), x ∈ {0, 1, …, N − 1}³. This supports an efficient implementation of the above algorithm exploiting the Fast Fourier Transform (FFT).
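One locally optimal update of the greedy scheme can be sketched as follows: form the body force, obtain a velocity by inverting a simplified scalar surrogate of L†L in the Fourier domain, and advance u per equation (70). The surrogate operator (a|ω|² + c)² applied to each component, the 2-D images, the nearest-neighbour warping, and the constants are all assumptions made for the sketch, not the full operator or implementation described above.

import numpy as np

# One greedy time step: body force b = (I0(x-u) - I1(x)) grad I0(x-u), velocity from a
# Fourier-domain solve of a simplified scalar operator, then u <- u + dt (I - grad u) v.
N, a, c, dt = 64, 1.0, 0.1, 0.1
coords = np.stack(np.meshgrid(np.arange(N), np.arange(N), indexing="ij")) / N
I0 = np.exp(-80 * ((coords[0] - 0.45) ** 2 + (coords[1] - 0.5) ** 2))    # template
I1 = np.exp(-80 * ((coords[0] - 0.55) ** 2 + (coords[1] - 0.5) ** 2))    # target
u = np.zeros((2, N, N))

def warp(img, u):
    # nearest-neighbour sampling of img at x - u(x); crude but enough for a sketch
    idx = np.clip(np.rint((coords - u) * N).astype(int), 0, N - 1)
    return img[idx[0], idx[1]]

I0w = warp(I0, u)
gradI0 = np.stack(np.gradient(I0, 1.0 / N))
gradI0w = np.stack([warp(gradI0[i], u) for i in range(2)])
body = (I0w - I1) * gradI0w                      # body force, one component per axis

# invert the surrogate operator (a*|w|^2 + c)^2 componentwise in the Fourier domain
k = np.fft.fftfreq(N) * 2 * np.pi * N
w2 = k[:, None] ** 2 + k[None, :] ** 2
v = np.stack([np.real(np.fft.ifft2(np.fft.fft2(body[i]) / (a * w2 + c) ** 2)) for i in range(2)])

grad_u = np.stack([np.stack(np.gradient(u[i], 1.0 / N)) for i in range(2)])
u = u + dt * np.stack([v[i] - grad_u[i, 0] * v[0] - grad_u[i, 1] * v[1] for i in range(2)])
print("update magnitude:", np.abs(dt * v).max())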
Suppressing the d subscript on v_{kd} and the summation from d = 1 to 3 for the rest of this section simplifies notation. The complex coefficients v_k(t_i) = a_k(t_i) + j b_k(t_i) have complex-conjugate symmetry because they are computed by taking the FFT of a real valued function, as seen later in this section. In addition, the eigen functions have complex-conjugate symmetry due to the 2π periodicity of the complex exponentials. Using these two facts, it can be shown that the vector field
v(x, t_i) = Σ_{k=0}^{d} v_k(t_i) φ_k(x) = 2 Σ_{k=0}^{d} ( a_k(t_i) Re{φ_k(x)} − b_k(t_i) Im{φ_k(x)} )   (71)
is ℝ³ valued even though both v_k(t_i) and φ_k(x) are complex valued. The minimization problem is solved by computing, for each v_k = a_k + j b_k,
g_k^(n+1)(t_i) = g_k^(n)(t_i) − Δ ∂H(v^(n))/∂g_k(t_i),   (72)
where g_k ∈ {a_k, b_k}. Combining Eqs. 70 and 71 and taking derivatives gives
∂u^(n)(x, t_i)/∂a_k = 2δ(I − ∇u(x, t_i)) Re{φ_k(x)},
∂u^(n)(x, t_i)/∂b_k = −2δ(I − ∇u(x, t_i)) Im{φ_k(x)}.   (73)
Consider the computation of the following terms of the algorithm from equations 73:
Term 1: ∫_Ω (I_0(x − u(x, t_i)) − I_1(x)) ∇I_0(x − u(x, t_i)) · (I − ∇u(x, t_i)) φ_k(x) dx
Term 2: a second inner-product term of the same form over Ω; both terms are evaluated below using 3-D FFTs.
Computation of Term 1: The first term, given by
θ_k(t_i) = ∫_Ω Σ_{d=1}^{3} (I_0(x − u(x, t_i)) − I_1(x)) ∇I_0(x − u(x, t_i)) · (I − ∇u(x, t_i)) c_k^(d) e^{j⟨ω_k, x⟩} dx,
can be written as a sum of integrals that can be computed efficiently using three 3D FFTs of the form
Θ_s(ω_k, t_i) = ∫_Ω f_s(x, t_i) e^{j⟨ω_k, x⟩} dx,  where f_s(x, t_i) = Σ_{d=1}^{3} (I_0(x − u(x, t_i)) − I_1(x)) (∂I_0/∂x_d)(x − u(x, t_i)) (I − ∇u(x, t_i))_{ds},
and s = 1, 2, 3. These FFTs are used to evaluate Eq. 72 by noticing that the derivative contributions with respect to a_k are −4γ Re{Θ_s(ω_k, t_i)} and with respect to b_k are 4γ Im{Θ_s(ω_k, t_i)}.
Computation of Term 2: The second term can likewise be computed efficiently using three 3D FFTs, specifically 3D FFTs of functions h_s(x, t_i), s = 1, 2, 3, each formed, like f_s above, from sums over d = 1, 2, 3 of products of the image terms and the entries of (I − ∇u(x, t_i)). Using the FFT to compute the terms in the method provides a substantial decrease in computation time over brute force computation. For example, suppose that one wanted to process 256³ voxel data volumes. The number of iterations is the same in both the FFT and brute force computation methods and therefore does not contribute to our present calculation.
For each 3D summation in the method, the brute force computation requires on the order of N⁶ computations while the FFT requires on the order of 3N³ log₂(N) computations. For N³ = 256³ voxel data volumes this provides approximately a 7 × 10⁵ speed up for the FFT algorithm compared to brute force calculation.
6. Rapid Convergence Algorithm for Large Deformation Volume Transformation
Faster converging algorithms than gradient descent exist, such as the conjugate gradient method, for which the basis coefficients v_k are updated with optimal step sizes. The identical approach using FFTs follows as in the '212 patent. Identical speed-ups can be accomplished; see the '212 patent.
6.1 An extension to general distance functions
Thus far, only the Gaussian distance function measuring the disparity between imagery, ∫ |I_0(x − u(x)) − I_1(x)|² dx, has been described. More general distance functions can be written as ∫ D(I_0(x − u(x)), I_1(x)) dx. A wide variety of distance functions are useful for the present invention; for example, the correlation distance or the Kullback-Leibler distance can be written in this form.
6.2 Computational Complexity
The computational complexity of the methods herein described is reduced compared to direct computation of the inner-products. Assuming that the image is discretized on a lattice of size N³, each of the inner-products in the algorithm, if computed directly, would have a computational complexity of O((N³)²). As the inner-products are the most computationally intensive, the overall complexity of the method is O((N³)²). Now in contrast, each of the FFTs proposed has a computational complexity of O(N³ log₂ N³), and hence the total complexity of the proposed algorithm is O(N³ log₂ N³). The speed up is given by the ratio N⁶/(N³ log₂ N³) = N³/(3 log₂ N). Thus the speed up is 64 times for a 16 × 16 × 16 volume and greater than 3.2 × 10⁴ for a 256 × 256 × 256 volume. A further factor of two savings in computation time can be gained by exploiting the fact that all of the FFTs are real. Hence all of the FFTs can be computed with corresponding complex FFTs of half the number of points (see Oppenheim).
6.3 Boundary Conditions of the Operator
Additional methods similar to the one just described can be synthesized by changing the boundary conditions of the operator. In the previous section, the minimization problem was formulated with cyclic boundary conditions. Alternatively, we could use the mixed Dirichlet and Neumann boundary conditions
u_i(x | x_j = k) = 0 for i, j = 1, 2, 3; i ≠ j; k = 0, 1; and ∂u_i/∂x_i (x | x_i = k) = 0 for i = 1, 2, 3; k = 0, 1,   (74)
where the notation (x | x_i = k) means x such that x_i = k. In this case the eigen functions would be of the form
φ_k^(r)(x) = [ c_{1k}^(r) cos ω_{k1}x_1 sin ω_{k2}x_2 sin ω_{k3}x_3, c_{2k}^(r) sin ω_{k1}x_1 cos ω_{k2}x_2 sin ω_{k3}x_3, c_{3k}^(r) sin ω_{k1}x_1 sin ω_{k2}x_2 cos ω_{k3}x_3 ]^T for r = 1, 2, 3.   (75)
The implementation presented in Section 5 is modified for different boundary conditions by modifying the butterflies of the FFT from complex exponentials to appropriate sines and cosines.
7.0 Large Deformation Diffeomorphisms on a Sphere
In many applications, image registration algorithms operate on images of spherical objects. Accordingly, the following section extends the diffeomorphic landmark matching technique introduced above to create an embodiment of the present invention for registering images of objects having spherical geometries. A method for registering images using large deformation diffeomorphisms on a sphere comprises selecting a coordinate frame suitable for spherical geometries and registering the target and template image using a large deformation diffeomorphic transform in the selected coordinate frame. While there are many applications that benefit from an image registration technique adapted to spherical geometries, one such example, registering brain images, is discussed herein to illustrate the technique. One skilled in the art will recognize that other imaging applications are equally suited for this technique, such as registering images of other anatomical regions and registering non-anatomical imagery containing spherical regions of interest.
An application of brain image registration is the visualization of cortical studies. Current methods of visualizing cortical brain studies use flat maps. Although flat maps bring the buried cortex into full view and provide compact representations, limitations are introduced by the artificial cuts needed to preserve topological relationships across the cortical surface. Mapping the cortical hemisphere surface to a sphere, however, allows points on the surface to be represented by a two-dimensional coordinate system that preserves the topology. Spherical maps allow visualization of the full extent of sulci and the buried cortex within the folds. An embodiment consistent with the present invention generates large deformation diffeomorphisms on the sphere S², which has a one-to-one correspondence with a reconstructed cortical surface.
These diffeomorphisms are generated as solutions to the transport equation:
dφ(x, t)/dt = v(φ(x, t), t),  φ(x, 0) = x ∈ S², t ∈ [0, 1].   (76)
For the spherical coordinate maps, the final transformed coordinate map is defined as φ(·, 1) ∈ S² and controlled through the ODE in equation 76 by the velocity field v(·, t), t ∈ [0, 1], constrained to be in the tangent space of the sphere.
The template image and target image spheres are characterized by the set of landmarks {x_n, y_n, n = 1, 2, …, N} ⊂ S². Diffeomorphic matches are constructed by forcing the velocity fields to be associated with quadratic energetics on S² × [0, 1]. The diffeomorphic landmark matching is constructed to minimize a running smoothness energy on the velocity field as well as the end point distance between the template and target landmarks.
Given noisy template landmarks x_n ∈ S² matched to target landmarks y_n ∈ S² measured with error variances σ_n², a suitable diffeomorphism is given by:
dφ(x, t)/dt = v(φ(x, t), t),  φ(x, 0) = x ∈ S², t ∈ [0, 1]   (77)
where v(·) = arg min_v ∫_0^1 ∫_{S²} ||Lv(x, t)||² dμ(x) dt + Σ_{n=1}^{N} Ψ(y_n, φ(x_n, 1))²/σ_n²,   (78)
where L is a differential operator, Ψ(x, y) is the solid angle between points x, y on the sphere, E_i, i = 1, 2 are the coordinate frames on the sphere, and dμ(x) = sinψ dψ dθ is the surface measure.
The velocity field should be represented so that v(·,·) = Σ_{i=1}^{2} v_i(·,·) E_i(·) is in the span of E_1 and E_2, a basis that spans the tangent space of the sphere S². An azimuth/elevation parameterization for this space is: θ ∈ [0, 2π), ψ ∈ (0, π); let x be the chart:
x(θ, ψ) = (sinψ cosθ, sinψ sinθ, cosψ).   (79)
The coordinate frames are given by ∂x/∂θ and ∂x/∂ψ according to:
∂x/∂θ = (−sinψ sinθ, sinψ cosθ, 0),   (80)
∂x/∂ψ = (cosψ cosθ, cosψ sinθ, −sinψ).   (81)
However, these frames vanish at ψ = 0, π, namely the north and south poles of the sphere, and leave these two points transformation-invariant. Since no non-vanishing continuous vector field exists on the sphere S², an alternative parameterization that has non-zero coordinate frames everywhere except at one point is appropriate. This is acceptable because each hemisphere of the brain is mapped to a sphere individually, and the area where they are attached to each other is transformation invariant. For some applications, this conforms to an anatomical constraint, e.g., the point at which the brain attaches to the spine. Moreover, the location producing zero-valued coordinate frames can be user selected by computing a rigid alignment to move this location in the image. One example of such an alignment involves aligning the sets of landmarks by computing a rigid transform (which, in a spherical coordinate frame, is a rotation) and then applying another rigid transform to ensure that the fixed points are aligned correctly.
An embodiment of the present invention uses stereographic projection to parameterize the unit sphere corresponding to the right hemisphere of the brain, with the center shifted to (−1, 0, 0), and takes the shadow (u, v) of each point in the yz plane while shining a light from the point (−2, 0, 0). Note that the shadow of the point where the light source is located is at infinity, and this becomes the point where the coordinate frames vanish. The location of the light source and the projection plane can be adjusted to place the transformation invariant point as needed. Let P denote the stereographic projection from S² \ (−2, 0, 0) to (u, v) ∈ ℝ²:
P: (x_1(θ,ψ), x_2(θ,ψ), x_3(θ,ψ)) → (u(θ,ψ), v(θ,ψ)) = ( 2x_2/(2 + x_1), 2x_3/(2 + x_1) ) ∈ ℝ²   (82)
and the chart F mapping (u, v) back into S² is given by:
F(u, v) = ( −2(u² + v²)/(u² + v² + 4), 4u/(u² + v² + 4), 4v/(u² + v² + 4) ).   (83)
The coordinate frames on the sphere, E_1(θ,ψ) = ∂F/∂u and E_2(θ,ψ) = ∂F/∂v evaluated at (u(θ,ψ), v(θ,ψ)), become:
E_1(θ,ψ) = ( −16u(θ,ψ), 4v(θ,ψ)² − 4u(θ,ψ)² + 16, −8u(θ,ψ)v(θ,ψ) ) / (u(θ,ψ)² + v(θ,ψ)² + 4)²,   (84)
E_2(θ,ψ) = ( −16v(θ,ψ), −8u(θ,ψ)v(θ,ψ), 4u(θ,ψ)² − 4v(θ,ψ)² + 16 ) / (u(θ,ψ)² + v(θ,ψ)² + 4)².   (85)
Both E_1 and E_2 are 0 when p = (−2, 0, 0), but that is the transformation invariant point on the sphere representing the right hemisphere.
A similar parameterization can be obtained for the left hemisphere by shifting the center to (1,0,0) and shining the light from (2,0,0) which becomes the transformation invariant point. Accordingly, an appropriate coordinate frame for images containing objects having spherical geometry is generated for subsequent image registration.
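The projection P and chart F can be sketched directly, as below. The formulas follow from the stated geometry (sphere centered at (−1, 0, 0), light at (−2, 0, 0), shadow in the yz plane) and are offered as an illustrative check of that convention rather than a definitive implementation.

import numpy as np

# Stereographic projection for the right-hemisphere sphere centered at (-1, 0, 0):
# project from the light source (-2, 0, 0) onto the plane x1 = 0.
def P(p):
    x1, x2, x3 = p
    return np.array([2 * x2 / (2 + x1), 2 * x3 / (2 + x1)])

def F(uv):
    u, v = uv
    s = u ** 2 + v ** 2 + 4
    return np.array([-2 * (u ** 2 + v ** 2) / s, 4 * u / s, 4 * v / s])

# round-trip check on a point of the unit sphere centered at (-1, 0, 0)
theta, psi = 0.7, 1.1
p = np.array([np.sin(psi) * np.cos(theta) - 1.0, np.sin(psi) * np.sin(theta), np.cos(psi)])
print(np.allclose(F(P(p)), p))                   # True: F inverts P away from (-2, 0, 0)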
Large deformation landmark matching on the sphere proceeds with diffeomorphisms constructed as solutions to the transport equation by forcing velocity fields to minimize quadratic energetics on S² × [0, 1] defined by the Laplacian. The Laplacian operator on functions defined on the sphere in azimuth-elevation parameters is:
∇²f = (1/sinψ) ∂/∂ψ ( sinψ ∂f/∂ψ ) + (1/sin²ψ) ∂²f/∂θ².   (86)
The energy associated with v((θ, ψ), t) = Σ_{i=1}^{2} v_i((θ, ψ), t) E_i(θ, ψ) is:
E(v) = ∫_0^1 ∫ Σ_{i=1}^{2} |∇² v_i((θ, ψ), t)|² sinψ dψ dθ dt.   (87)
The covariance operator is then computed using spherical harmonics. The covariance operator (which is the Green's function squared of the Laplacian operator) will determine the solution to the diffeomorphic matching (see equation 93 below). Spherical harmonics Y_nm of order n form a complete orthogonal basis of the sphere that correspond to the eigenfunctions of the differential operator: ∇²Y_nm = −n(n + 1)Y_nm. There are (2n + 1) spherical harmonics of order n for each n, and they are of the even and odd harmonic form with 0 ≤ m ≤ n:
Y_nm^e(θ, ψ) = k_nm P_n^m(cosψ) cos mθ   (88)
Y_nm^o(θ, ψ) = k_nm P_n^m(cosψ) sin mθ   (89)
with P_n^m the associated Legendre polynomials and k_nm = √( (2n + 1)(n − m)! / (2π(n + m)!) ) normalization constants.
The Green's function on the sphere for two points p and q becomes
G(p, q) = Σ_{n=1}^{∞} Σ_{m=0}^{n} 1/(n(n + 1)) [ Y_nm^e(p) Y_nm^e(q) + Y_nm^o(p) Y_nm^o(q) ].   (90)
Orthonormality of spherical harmonics and the addition theorem for spherical harmonics result in a covariance operator of the form:
K(p, q) = K(Ψ(p, q)) = Σ_{n=1}^{∞} (2n + 1)/(4π n²(n + 1)²) P_n(cos Ψ(p, q))   (91)
where Ψ(p, q) = arccos(p_1q_1 + p_2q_2 + p_3q_3) is the solid angle between the points p and q. The covariance is shift invariant on the sphere and coordinate-free.
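The scalar covariance of equation (91) can be evaluated by truncating the Legendre series, as in the sketch below; scipy's eval_legendre supplies P_n, and the truncation order is an assumption.

import numpy as np
from scipy.special import eval_legendre

def covariance(psi, n_max=50):
    # K(psi) = sum_{n=1}^{n_max} (2n+1) / (4 pi n^2 (n+1)^2) * P_n(cos psi), truncated series
    n = np.arange(1, n_max + 1)
    weights = (2 * n + 1) / (4 * np.pi * n ** 2 * (n + 1) ** 2)
    return np.sum(weights * eval_legendre(n, np.cos(psi)), axis=-1)

# solid angle between two points p, q on S^2 and the covariance between them
p = np.array([0.0, 0.0, 1.0])
q = np.array([np.sin(0.3), 0.0, np.cos(0.3)])
psi = np.arccos(np.clip(p @ q, -1.0, 1.0))
print(covariance(psi))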
The resulting 2 × 2 covariance matrix K(p, q), whose entries are built from the scalar covariance K(Ψ(p, q)) and the coordinate frames E_1, E_2, and the 2N × 2N matrix K(Φ(t)) = ( K(φ(x_n, t), φ(x_m, t)) )_{n,m = 1, …, N} are assembled from these quantities.
Instead of trying to solve the optimization problem directly in the velocity fields over all of space-time, v(x, t) ∈ S² × [0, 1], the N landmark trajectories and velocities are estimated as follows.
The large deformation landmark matching transformation on the sphere is given by the diffeomorphism:
φ(x, 1) = ∫_0^1 v(φ(x, t), t) dt + x,  x ∈ S²,
with v solving the minimum problem
v̂(·) = arg min_v ∫_0^1 ∫_{S²} Σ_{i=1}^{2} ||∇² v_i(x, t)||² dμ(x) dt + Σ_{n=1}^{N} Ψ(y_n, φ(x_n, 1))²/σ_n²   (92)
where dμ(x) = sinψ dψ dθ and v(x, t) satisfies v(x, t) = Σ_{i=1}^{2} v_i(x, t) E_i(x):
v(x, t) = Σ_{n=1}^{N} K(φ(x_n, t), x) Σ_{m=1}^{N} (K(Φ(t))⁻¹)_{nm} φ̇(x_m, t),   (93)
φ(x, 1) = ∫_0^1 v(φ(x, t), t) dt + x,   (94)
where Ψ(y_n, φ(x_n, 1)) is the solid angle between the target landmark y_n and the final position of the template landmark φ(x_n, 1).
The algorithm for landmark matching reduces the problem to a finite dimensional problem by defining the flows on the finite grid of fixed times of size δ, t_k = kδ, k = 0, 1, …, K = 1/δ. An embodiment of the present invention uses a gradient algorithm to register the images using the selected spherical coordinate frame and the derived large deformation diffeomorphic transform.
The finite dimensional minimization becomes, with v_i(x_n, 0) = 0, n = 1, …, N, i = 1, 2, the minimization over v_i(x_n, t_k), n = 1, …, N, k = 1, …, K, of
δ Σ_{k=1}^{K} Σ_{n,m=1}^{N} v(x_n, t_k)^T (K(Φ(t_k))⁻¹)_{nm} v(x_m, t_k) + Σ_{n=1}^{N} Ψ(y_n, φ(x_n, 1))²/σ_n².   (95)
The gradient algorithm for minimizing Eqn. 95 initializes with m = 0 and v^(0)(x_n, t_k) = 0, n = 1, …, N, k = 1, …, K, and iterates for m = 0, 1, …: Calculate the gradient perturbation for each v_i(x_n, t_k), n = 1, …, N, k = 1, …, K, i = 1, 2:
v_i^(m+1)(x_n, t_k) = v_i^(m)(x_n, t_k) − Δ ∂H(v^(m))/∂v_i(x_n, t_k)   (96)
where ∂H(v^(m))/∂v_i(x_n, t_k) is the gradient of the discretized energy of Eqn. 95 with respect to v_i(x_n, t_k), δ[t_k − 1] = 1 for t_k = 1 and 0 otherwise, and Δ is the gradient step. After stopping, define the final iterate as v_i = v_i^(m+1), and for all x ∈ S²
v(x, t) = Σ_{n=1}^{N} K(φ(x_n, t), x) Σ_{m=1}^{N} (K(Φ(t))⁻¹)_{nm} v(x_m, t).
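Once velocities have been estimated, the landmark trajectories themselves can be advanced on the sphere. The sketch below uses forward Euler steps followed by re-normalization to stay on S², with a synthetic tangent velocity field standing in for the estimated one; it is an illustrative surrogate for integrating the transport equation (76), not the patent's algorithm.

import numpy as np

# Advance a landmark on S^2 through a tangent velocity field, re-projecting to the sphere
# after each Euler step.
def tangent_velocity(p, t):
    # synthetic velocity: rotation about the z axis, tangent to the sphere at p
    omega = np.array([0.0, 0.0, 0.8 * (1.0 - t)])
    return np.cross(omega, p)

def flow(p0, steps=20):
    p = p0 / np.linalg.norm(p0)
    dt = 1.0 / steps
    for k in range(steps):
        v = tangent_velocity(p, k * dt)
        v = v - p * (p @ v)                      # enforce tangency explicitly
        p = p + dt * v
        p = p / np.linalg.norm(p)                # re-project onto S^2
    return p

x_n = np.array([np.sin(1.0) * np.cos(0.4), np.sin(1.0) * np.sin(0.4), np.cos(1.0)])
print("phi(x_n, 1) =", flow(x_n))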
8. Apparatus for Image Registration
Fig. 2 shows an apparatus to carry out an embodiment of this invention. A medical imaging scanner 214 obtains images 100 and 120 and stores them in computer memory 206, which is connected to a computer central processing unit (CPU) 204. One of ordinary skill in the art will recognize that a parallel computer platform having multiple CPUs is also a suitable hardware platform for the present invention, including, but not limited to, massively parallel machines and workstations with multiple processors. Computer memory 206 can be directly connected to CPU 204, or this memory can be remotely connected through a communications network.
The methods described herein use information either provided by an operator, stored as defaults, or determined automatically about the various substructures of the template and the target, and varying degrees of knowledge about these substructures derived from anatomical imagery acquired from modalities like CT, MRI, functional MRI, PET, ultrasound, SPECT, MEG, EEG, or cryosection. For example, an operator can guide cursor 210 using pointing device 208 to select points in image 100.
The foregoing description of the preferred embodiments of the present invention has been provided for the purpose of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously many modifications, variations, and simple derivations will be apparent to practitioners skilled in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims

We claim:
1. A method for registering a template image with a target image, the template image containing a plurality of points and the target image containing a plurality of points, comprising: defining at least one manifold landmark point in said template image; identifying at least one point in said target image corresponding to said defined at least one manifold landmark point; selecting a coordinate frame suitable for spherical geometries; computing a large deformation transform using said selected coordinate frame, at least one manifold landmark transformation operator, and at least one manifold landmark transformation boundary value, said large deformation transform relating said at least one manifold landmark point in said template image to said corresponding at least one point in said target image; and registering said template image with said target image using said large deformation transform in the selected coordinate frame.
2. The method of claim 1, wherein the step of computing a large deformation transform further includes the substep of computing a diffeomorphic transform.
3. The method of claim 1, wherein the step of computing a large deformation transform further includes the substep of accessing at least one precomputed transform value.
4. The method of claim 1, wherein the step of computing a large deformation transform further includes the substep of using a fast Fourier transform.
5. The method of claim 1, wherein the step of defining at least one manifold landmark point in said template image includes the substep of: performing at least one of: defining at least one manifold landmark point of dimension greater than zero in said template image; defining at least one point in said template image; defining points of a curve in said template image; defining points of a surface in said template image; or defining points of a volume in said template image.
6. The method of claim 1 , wherein the step of computing a large deformation transform includes the substep of: using at least one of a linear differentiable operator, a periodic boundary value, an infinite boundary value, mixed Dirichlet and Neumann boundary values, a Neumann boundary value, or a Dirichlet boundary value.
7. A method for registering a template image with a target image, comprising: selecting a coordinate frame suitable for spherical geometries; and registering the target and template image using a large deformation transform in the selected coordinate frame.
8. The method of claim 7, wherein the step of registering the target and template image further includes the substep of computing a diffeomorphic transform.
9. The method of claim 7, wherein the step of registering the target and template image further includes the substep of accessing at least one precomputed transform value.
10. The method of claim 7, wherein the step of registering the target and template image further includes the substep of using a fast Fourier transform.
11. The method of claim 7 further including, prior to registering the target and template image: performing at least one of: defining at least one manifold landmark point of dimension greater than zero in said template image; defining at least one point in said template image; defining points of a curve in said template image; defining points of a surface in said template image; or defining points of a volume in said template image.
12. A method for registering a template image with a target image, wherein the template image contains a plurality of points and the target image contains a plurality of points, comprising: selecting a coordinate frame suitable for spherical geometries; computing a first large deformation transform using at least one manifold landmark transformation operator relating at least one manifold landmark point in said template image to at least one corresponding point in said target image; computing a second large deformation transform using at least one image transformation operator relating said template image to said target image; fusing said first and second large deformation transforms; and registering said template image with said target image using said fused large deformation transforms in said selected coordinate frame.
13. An apparatus for registering a template image with a target image, the template image containing a plurality of points and the target image containing a plurality of points, comprising: means for defining at least one manifold landmark point in said template image; means for identifying at least one point in said target image corresponding to said defined at least one manifold landmark point; means for selecting a coordinate frame suitable for spherical geometries; means for computing a large deformation transform using said selected coordinate frame, at least one manifold landmark transformation operator, and at least one manifold landmark transformation boundary value, said large deformation transform relating said at least one manifold landmark point in said template image to said corresponding at least one point in said target image; and means for registering said template image with said target image using said large deformation transform in the selected coordinate frame.
14. An apparatus for registering a template image with a target image, the template image containing a plurality of points and the target image containing a plurality of points, comprising: means for selecting a coordinate frame suitable for spherical geometries; and means for registering the target and template image using a large deformation transform in the selected coordinate frame.
15. An apparatus for registering a template image with a target image, the template image containing a plurality of points and the target image containing a plurality of points, comprising: a coordinate frame selector for selecting a coordinate frame suitable for spherical geometries; and a registration processor for registering the target and template image using a large deformation transform in the selected coordinate frame.
16. An apparatus for registering a template image with a target image, wherein the template image contains a plurality of points and the target image contains a plurality of points, comprising: a pointing device for defining at least one manifold landmark point in said template image and for identifying at least one point in said target image corresponding to said at least one defined manifold landmark point; a coordinate frame selector for selecting a coordinate frame suitable for spherical geometries; a first data processing unit for computing a first large deformation transform using a manifold landmark transformation operator relating said at least one manifold landmark point in said template image to said corresponding at least one point in said target image; a distance quantifier for defining a distance between said target image and said template image; a second data processing unit for computing a second large deformation transform using an image transformation operator relating said template image to said target image; and a third data processing unit for fusing said first and second large deformation transforms; and a fourth data processing unit for registering said template image with said target image using said fused large deformation transforms in said coordinate frame suitable for spherical geometries.
17. A computer program product for use in a computer adapted for registering a template image with a target image, the computer program product comprising a computer readable medium for storing computer readable code means, which when executed by the computer, enables the computer to register a template image with a target image, and wherein the computer readable code means includes computer readable instructions for causing the computer to execute a method comprising: selecting a coordinate frame suitable for spherical geometries; and registering the target and template image using a large deformation transform in the selected coordinate frame.
PCT/US2000/025971 1999-09-22 2000-09-22 Method and apparatus for image registration using large deformation diffeomorphisms on a sphere WO2001022352A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP00965280A EP1222608A1 (en) 1999-09-22 2000-09-22 Method and apparatus for image registration using large deformation diffeomorphisms on a sphere
AU76019/00A AU7601900A (en) 1999-09-22 2000-09-22 Method and apparatus for image registration using large deformation diffeomorphisms on a sphere

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15514199P 1999-09-22 1999-09-22
US60/155,141 1999-09-22

Publications (1)

Publication Number Publication Date
WO2001022352A1 true WO2001022352A1 (en) 2001-03-29

Family

ID=22554253

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/025971 WO2001022352A1 (en) 1999-09-22 2000-09-22 Method and apparatus for image registration using large deformation diffeomorphisms on a sphere

Country Status (3)

Country Link
EP (1) EP1222608A1 (en)
AU (1) AU7601900A (en)
WO (1) WO2001022352A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109952597A (en) * 2016-11-16 2019-06-28 索尼公司 Brain registration between patient

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4906940A (en) * 1987-08-24 1990-03-06 Science Applications International Corporation Process and apparatus for the automatic detection and extraction of features in images and displays

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4906940A (en) * 1987-08-24 1990-03-06 Science Applications International Corporation Process and apparatus for the automatic detection and extraction of features in images and displays

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHRISTENSEN ET AL.: "Deformable templates using large deformation kinematics", IEEE, vol. 5, no. 10, October 1996 (1996-10-01), pages 1435 - 1447, XP002935094 *
CHRISTENSEN ET AL.: "Volumetric transformation of brain anatomy", IEEE, vol. 16, no. 6, December 1997 (1997-12-01), pages 864 - 877, XP002935093 *
DAVATZIKOS ET AL.: "Image registration based on boundary mapping", IEEE, vol. 15, no. 1, February 1996 (1996-02-01), pages 112 - 115, XP002935092 *
VARGA ET AL.: "An iterative elastic stretching technique applied to thermographic images", IEEE, May 1989 (1989-05-01), pages 324 - 328, XP002935091 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109952597A (en) * 2016-11-16 2019-06-28 索尼公司 Brain registration between patient
CN109952597B (en) * 2016-11-16 2023-03-31 索尼公司 Inter-patient brain registration

Also Published As

Publication number Publication date
AU7601900A (en) 2001-04-24
EP1222608A1 (en) 2002-07-17

Similar Documents

Publication Publication Date Title
US6633686B1 (en) Method and apparatus for image registration using large deformation diffeomorphisms on a sphere
EP1057137B1 (en) Rapid convolution based large deformation image matching via landmark and volume imagery
EP0910832B1 (en) Method and apparatus for image registration
Zhou et al. Cocosnet v2: Full-resolution correspondence learning for image translation
US6408107B1 (en) Rapid convolution based large deformation image matching via landmark and volume imagery
Botsch et al. Primo: coupled prisms for intuitive surface modeling
Liwicki et al. Euler principal component analysis
Fletcher et al. Gaussian distributions on Lie groups and their application to statistical shape analysis
US7561757B2 (en) Image registration using minimum entropic graphs
CN108027878A (en) Method for face alignment
WO2001043070A2 (en) Method and apparatus for cross modality image registration
Steedly et al. Spectral Partitioning for Structure from Motion.
Gu et al. Matching 3d shapes using 2d conformal representations
Koehl et al. Automatic alignment of genus-zero surfaces
Grossmann et al. Computational surface flattening: a voxel-based approach
Cootes Statistical shape models
WO2001022352A1 (en) Method and apparatus for image registration using large deformation diffeomorphisms on a sphere
Tristán et al. A fast B-spline pseudo-inversion algorithm for consistent image registration
CN113723208A (en) Three-dimensional object shape classification method based on normative equal transformation conversion sub-neural network
Filip et al. Regularized multi-structural shape modeling of the knee complex based on deep functional maps
Younes Combining geodesic interpolating splines and affine transformations
Shen et al. Fourier methods for 3D surface modeling and analysis
Srivastava et al. Maximum-likelihood estimation of biological growth variables
Patane et al. Surface-and volume-based techniques for shape modeling and analysis
Raviv et al. LRA: Local rigid averaging of stretchable non-rigid shapes

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2000965280

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2000965280

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 2000965280

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP