WO2016131955A1 - Automatic 3d model based tracking of deformable medical devices with variable appearance - Google Patents


Info

Publication number
WO2016131955A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
models
recorded
image processing
processing system
Prior art date
Application number
PCT/EP2016/053556
Other languages
French (fr)
Inventor
Christian Haase
Eberhard Sebastian Hansis
Dirk Schäfer
Michael Grass
Original Assignee
Koninklijke Philips N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips N.V. filed Critical Koninklijke Philips N.V.
Publication of WO2016131955A1 publication Critical patent/WO2016131955A1/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251: Analysis of motion using feature-based methods involving models
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/10116: X-ray image
    • G06T 2207/10121: Fluoroscopy
    • G06T 2207/10124: Digitally reconstructed radiograph [DRR]
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30048: Heart; Cardiac
    • G06T 2207/30052: Implant; Prosthesis

Definitions

  • the invention relates to an image processing system, to an image processing method, to a computer program element, and to a computer readable medium.
  • an image processing system comprising a model generator that includes an interpolator configured to compute, for a given one or more deformation parameters, from at least two pre-recorded models representing a deformable object in different shape states, at least one interpolated model representative of a deformed shape of the object.
  • the image processing system further comprises a navigation module configured to determine a current 3D position of the object in an examination region from a current 2D X-ray projection image.
  • the navigation module further includes a 2D-3D registration module configured to register a footprint of the object in the current 2D X-ray projection image with a best matching one of the pre-recorded and interpolated models. Once the registration has been established, the 3D position of the object in the examination region can be accurately determined. Thus, a 3D position of the object can be obtained from a single 2D X-ray projection.
  • the shape as per the interpolated model is intermediate between the shapes as per the input models.
  • the deformation parameter represents an additional degree of freedom (on top of a parameter(s) for a mere rigid rotation and/or translation) for deformations and allows the 2D/3D registration of devices that cannot be properly described by a single rigid state. A rigid simplification for such devices would lead to inaccurate 2D/3D registration results.
  • the non-rigid parameter may describe for instance an opening and/or closing of a clamp, an expansion of a stent, or a bending of a catheter (tip), or any other type of deformation.
  • the one or more deformation parameters represent the transition between initial object shapes as recorded by the pre-recorded models.
  • the operation of the interpolator is based on a morphing algorithm.
  • said deformation parameter has an upper bound and a lower bound, wherein the lower bound is associated with one of the pre-recorded models and the upper bound is associated with the other one of the pre-recorded models, wherein said interpolator is capable of computing an interpolated image for any parameter in the range between the lower and upper bounds.
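As an illustration of the bounded deformation parameter, the sketch below blends two pre-recorded volumetric models linearly for a parameter s in a normalized range [0, 1]. The actual interpolator uses a morphing algorithm rather than a plain voxel-wise blend, so this is only a simplified stand-in; the function name is a hypothetical.

```python
import numpy as np

def interpolate_model(m_lo: np.ndarray, m_hi: np.ndarray, s: float) -> np.ndarray:
    """Interpolated model for a deformation parameter s within its bounds.

    s = 0 reproduces the lower-bound model, s = 1 the upper-bound model;
    values in between yield an intermediate shape state.
    """
    if not 0.0 <= s <= 1.0:
        raise ValueError("deformation parameter must lie within its bounds")
    # Simple voxel-wise linear blend standing in for the morphing algorithm.
    return (1.0 - s) * m_lo + s * m_hi
```

For a mitral clip, m_lo and m_hi would be the fully open and fully closed models, and s the opening state.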
  • the pre-recorded models represent 'extreme' device states, for example a fully open and a fully closed state.
  • the one or more parameters are associated with a non-rigid shape transition from one of the two input image shapes to the other.
  • At least one of the input images is a 3D image.
  • the interpolated image is pre-computed prior to the acquisition of the 2D projection image or the interpolated image is computed after acquisition of the 2D projection image.
  • the object is a heart valve clip device or an artificial heart valve, or a stent or any other deformable device.
  • the proposed method and system allow an efficient implementation of realistically modelling actual deformations of real objects. This is achieved inter alia by considering a plurality of input models and not merely a single input model. Computational effort is focused on real-world deformation possibilities. No (or barely any) CPU time is "wasted" on computing shape states that are beyond the physical limits of the considered device or object.

BRIEF DESCRIPTION OF THE DRAWINGS
  • Fig. 1 shows an imaging arrangement
  • Fig. 2 shows X-ray images of different shape states of a device
  • Fig. 3 shows a flowchart of an image processing method.
  • a medical device OB such as a stent, an ablation or lasso catheter for Electrophysiology procedures or a mitral clip is introduced via a catheter CAT into the interior of the patient.
  • the medical devices listed before are merely exemplary and are not intended to limit what is proposed herein; any other device, in particular any other medical device, is envisaged herein.
  • Mitral clips are used to treat structural deficiencies in the human (or possibly animal) heart. More particularly, there is a condition in which heart valves no longer close properly, which has the undesirable effect that, during systolic action, a certain quantity of blood backflows ("regurgitates") through said valves into the atrium. This can lead to poor oxygenation, which in turn can lead to a raft of other medical complications.
  • a mitral clip has been devised elsewhere which is inserted into the patient through a suitable access point (for example through the femoral artery) and is then advanced through the patient's vasculature by means of a deployment catheter into the human heart opposite the incompetent valve.
  • a suitable access point for example through the femoral artery
  • the clip is then applied to the heart valve's leaves to clip them together, thereby avoiding or lessening the amount of regurgitation. It is of note that this grab-and-clip action is done during normal heart activity.
  • precise navigation through the vasculature and also precise positioning at the valve is at a premium. Also, one normally needs to check whether the clip has been properly deployed at the correct spatial location, etc.
  • an X-ray imaging apparatus IMX operates to produce a series of fluoroscopic images (that is, X-ray projection images) that are displayed on a screen to the operator.
  • the fluoroscopic frames or images are used to help navigate the medical device OB through the patient and to oversee its correct application at the correct site.
  • there is a second image channel to supply additional imagery such as 3D ultrasound TEE (Trans-esophageal Echocardiography) imagery.
  • the imaging apparatus IMX acquires the projection imagery along a given projection direction which however is adjustable by moving a gantry G to which on one side an X-ray source XR is attached.
  • On the other side of the gantry, opposite the X-ray source and across an examination region, a radiation sensitive detector D is mounted.
  • the patient PAT is located within the examination region or in particular the region of interest of the patient is ensured to be within the examination region so it can be flooded with X-ray radiation that is emitted by the X-ray source during an image acquisition.
  • the X-ray radiation passes through the region of interest.
  • the X-ray radiation after interaction with matter in the region of interest then interacts with the detector D.
  • the data acquisition circuitry converts electrical signals generated by the X-ray radiation (at the X-ray detector D) into respective X-ray frames or images.
  • absorption images are generated this way, which encode, as contrast, density or attenuation differences in the region of interest for the given projection direction.
  • Other types of contrast such as spectral and phase contrast are also envisaged in some embodiments.
  • the proposed arrangement includes a navigation module NAV.
  • the navigation device NAV receives as its input a current 2D X-ray projection image (of the object OB resident in the patient) supplied by the X-ray imager IMX.
  • the navigation module NAV receives a single such frame or a single frame at a time.
  • the NAV then processes this 2D X-ray image and produces localization information, for instance three-dimensional coordinates p(x,y,z) that indicate the position in 3D of the device in the examination region.
  • the navigation module NAV as proposed herein is model-based.
  • the navigation module NAV operates not only on the read-in X-ray frame but also on one or more image models M generated by a model generator MG, whose operation will be explained in more detail below.
  • model generator MG may include an input port IN and an output port OUT. Operation of model generator MG is based on a "library" of two or more 3D pre-recorded models Mi and Mk. Preferably, but not necessarily, these pre-recorded model images have been generated in a preparatory phase prior to when the 2D projection X-ray image was generated. The images used as a basis for these pre-recorded models may have been acquired by using a 3D imaging modality such as a computed tomography (CT) scanner. A series of different 3D image volume blocks are generated to record the device OB (to be used in the intervention) in different shape states.
  • an exemplary device OB is the previously mentioned mitral clip 202 deployable by its catheter 204.
  • Panes A-C show simulation imagery of such a clip 202.
  • a clip 202 has two jaws through which the clipping action can be implemented.
  • the jaws can be in an open state as shown in pane A and can be actuated to move into a closed state to exercise a pinch action to press the valve's leaflets together.
  • Pane C shows a state where the delivery catheter 204 is disconnected from the clip 202 which remains in situ once deployed.
  • panes A through C in Fig. 2 are for illustrative purposes only and in no way limiting.
  • Panes D-F show (from left to right) a similar transitioning of a mitral clip from an open shape state to a closed shape state.
  • Panes D-F are reproductions of actual fluoroscopic frames.
  • Panes A-F are illustrative only and not limiting.
  • Other devices may have other shape states than the ones shown in Fig. 2.
  • a "library" comprising a series of different 3D images can be generated, each showing the object in a different shape state.
  • a segmenter module (not shown) can then be used to identify the different footprint shapes of the object in the respective 3D image blocks.
  • the segmented 3D image blocks then form the models Mi, Mk, which can be held in a database or other memory to be readily accessible by the model generator through its input port IN.
  • the model generator is provided with an interpolator to produce a "fine-tuned" model Mj from the pre-recorded models Mi, Mk.
  • Models include 3D representations of the structural footprint of the object of interest (in this case that of a medical device such as the clip or the catheter-clips system).
  • the 3D models are derivable in this manner for example via segmentation from 3D imagery including actual physical information relating to the object OB, for instance X-ray attenuation values. This helps achieve correct registration with 2D X-ray projections as will be explained below in greater detail.
  • the model generator MG uses a morphing algorithm or similar to describe a continuous transformation between the different shape states as recorded by the pre-recorded models Mi and Mk.
  • the computed or synthesized intermediate model Mj is representative of a deformed state of either one of the pre-recorded models.
  • a computed model Mj is to capture an estimate for an inter-state shape as one would obtain if one were to continuously transform or deform one of the shapes Mi or Mk into the respective other model Mk or Mi.
  • the newly computed model or intermediate model Mj is then forwarded, along with the pre-recorded models Mi and Mk, to the navigation unit NAV, either directly or via an output port of the model generator MG.
  • the navigation unit NAV includes a 2D-3D registration module RM.
  • Registration module RM attempts to 2D-3D register the received 2D X-ray frame (from X-ray imager IMX) of object OB with a best matching one of the pre-recorded and/or computed models {Mi, Mj, Mk}.
  • a conventional 2D-3D registration module would use only variations over the six spatial degrees of freedom, namely the three translations along the spatial axes x, y, z and three respective rotations thereabout.
  • the deformation parameters include additional, “artificial” or “virtual” degrees of freedom to capture non-rigid transformation aspects between the various models to so find the best model to achieve a best 3D-2D registration result.
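The combined search space can be sketched as one parameter vector holding the rigid and the additional non-rigid degrees of freedom. The layout [tx, ty, tz, rx, ry, rz, d1, ..., dn] below is an assumption for illustration, not the patent's specification:

```python
import numpy as np

def split_parameters(theta: np.ndarray):
    """Split a registration search vector into rigid and non-rigid parts.

    Assumed layout: three translations, three rotations, then n
    deformation parameters (the extra "virtual" degrees of freedom).
    """
    translation = theta[:3]   # tx, ty, tz
    rotation = theta[3:6]     # rx, ry, rz (e.g. Euler angles)
    deformation = theta[6:]   # d1, ..., dn
    return translation, rotation, deformation
```

An optimizer then varies all entries of theta jointly, so a deformed and displaced model can be matched in one search.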
  • with n being whatever number of parameters results from the morphing ansatz used.
  • the final values of the n deformation parameters are provided by the navigation module NAV as an optional (user-requestable) additional output, along with the three-dimensional orientation and the position coordinates p(x,y,z).
  • the deformation parameter(s) furnishes the operator with additional information about the state of a deformation of the device OB (e.g. opening angle of a mitral clip, extent of a stent expansion, etc.).
  • the parameters of the rigid motion may be output in addition to or instead of the parameters for the non-rigid deformation part, in dependence on user request.
  • the coordinates and/or the parameters (for the rigid motion or the deformation) may be shown in numerical form in one or more info-box window widgets overlaid on the current fluoroscopic frame on a display device.
  • the manner in which the goodness of the registration between the 2D image and the 3D models Mi,j,k is measured will depend on the algorithmic particulars of the registration algorithm used. For instance, some embodiments envisaged herein use an optimization approach where a cost function is set up that measures a deviation between the footprint of the device or object OB as recorded in the received X-ray image and a synthesized footprint.
  • the synthesized footprint is obtainable by forward-projecting across the respective model onto an imaging plane corresponding to that of the received X-ray image. This imaging plane may be defined by the current detector D position and orientation, and use the projection direction at which the received X-ray image has been recorded.
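The synthesized footprint can be sketched as a forward projection of the model volume. The parallel-beam sum along one axis below is a deliberate simplification of the real cone-beam geometry defined by the detector position and projection direction:

```python
import numpy as np

def forward_project(model: np.ndarray, axis: int = 0) -> np.ndarray:
    """Collapse a 3D attenuation volume into a 2D projection image.

    Integrating (here: summing) attenuation along the projection
    direction yields the model's synthesized footprint.
    """
    return model.sum(axis=axis)
```

Registering against such a footprint works because the models carry actual attenuation values, so the projection resembles what the X-ray imager records.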
  • the inter-state shape model(s) Mj can be used to achieve a possibly better fit than if one were to restrict oneself only to the pre-recorded models Mi, Mk. It will be appreciated that in the above, reference has been made to one intermediate image Mj, although it will be clear to those skilled in the art that the proposed method allows computing an arbitrary number n of intermediate inter-shape images Mj1, ..., Mjn for any given pair of initial images Mi and Mk.
  • the above described task of finding an intermediate image between two given images that represent deformation of either of the two initial images is essentially a task of interpolation and the proposed model generator MG includes an interpolator ITP to implement this interpolation.
  • the following morphing algorithm, as described in D. Cohen-Or et al., "Three-Dimensional Distance Field Metamorphosis", ACM Transactions on Graphics, Vol. 17, No. 2, April 1998, pp. 116-141, is used herein, although other deformation or geometrical transformation operations may also be used with benefit.
  • the location coordinates p for the in-situ device OB can be output by navigation module NAV.
  • the relevant object is placed into the examination region of the 3D imaging modality (e.g. CT) (without the patient residing in the examination region).
  • the different shape states are then realized by actuating the device to effect the deformations in sufficiently small increments, with the respective input images Mi,k acquired for each deformation increment.
  • the imaging modalities for the model image acquisition and for the 2D projection image acquisition may be physically different; for instance, the modality for the 2D X-ray may be a projective X-ray radiographic system and the 3D modality for acquisition of the models may be a CT scanner. But this may not necessarily be so for all embodiments; for instance, the 3D images and the 2D images during the intervention may be acquired by the same imaging equipment, for instance a modern C-arm X-ray imager may be so used. In this case, where the same imager is used for the shape gathering phase and the actual intervention, the model images are acquired first, without the patient present. After the 3D images have been recorded to capture the different shape states of the relevant medical device OB, the patient is then admitted into the examination region and the intervention can then commence.
  • the input images/models Mi,k are preferably acquired using the very same object prior to its use in the intervention. Strictly speaking this may not necessarily be required, and an essentially identical or at least sufficiently similar copy or duplicate of the to-be-used (or already in use) device can be taken instead for the shape gathering phase to obtain the model imagery Mi,k. This may be useful for instance if the clip OB or device in question is already introduced into the patient and being advanced to its deployment site. One can then use the 3D imaging modality to acquire the models Mi,k by using the duplicate clip and then forward the models to the model generator MG when needed.
  • FIG. 3 shows diagrammatically steps of an image processing method as implemented by the above arrangement in Fig. 1.
  • the input models represent different shapes of a deformable object OB, for instance a medical device.
  • the input images are preferably derived from 3D images of the deformable object OB recorded at different stages or states of deformation in a "shape gathering phase" prior to the intervention.
  • Such input imagery is obtainable by a tomographic scanner (CT) for instance, or another 3D imaging modality such as NMR.
  • a plurality of CT projection images obtained from different projection directions around the body are reconstructed (e.g., by filtered back-projection (FBP)) into a plurality of cross-section images that together form a 3D image block.
  • a block is acquired for each of the different shape states or deformation instances, e.g., for the mitral clip, the "open" and "closed" states and one or more states between these two extreme states.
  • These 3D blocks may then be segmented for image structure(s) that represent the object.
  • the so segmented blocks then constitute the received input images Mi, Mk, also referred to herein as input models.
  • an interpolated model is computed from the two input models.
  • the interpolated model is representative of an "estimate" of how the object shape may look at a particular instance if one were to deform either of the two object shapes (as recorded in the two input images) into the other.
  • the interpolated image is computed by varying one or more deformation parameters.
  • a morphing algorithm can be used, such as the one referenced above. Other algorithms are also envisaged.
  • Topologically, a continuous (in the mathematical sense) transformation is defined. This continuous transformation T is set up to map a shape index parameter s in a range [a,c] (bounded by a from below and by c from above) onto the two images. In practice, the interval can be normalized down to the unit interval [0,1].
  • for a parameter value s between the bounds, one obtains an interpolation image T(s) that captures a specific, "virtual", intermediate deformation state of either one of the object shapes as per the two input images.
  • a surface model made up of surface points is constructed on and for each input model Mi,k.
  • pairs of corresponding surface points are constructed either manually or by some geometric recognition module, e.g. local 3D patch matching around feature points like edges and corners.
  • Another option for automatic correspondence detection is an N-SIFT algorithm as described by W. Cheung et al. in "N-SIFT: N-Dimensional Scale Invariant Feature Transform".
  • a line is then cast through each pair of corresponding points.
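A toy version of distance-field morphing in the spirit of the Cohen-Or et al. reference can be sketched as follows. Each shape is converted to a signed distance field, the fields are blended for an intermediate parameter s, and the non-positive region of the blend gives the interpolated shape; the warp along correspondence lines used by the real algorithm is omitted, and the brute-force distance computation is for illustration only:

```python
import numpy as np

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Signed distance field: negative inside the shape, positive outside."""
    inside_pts = np.argwhere(mask).astype(float)
    outside_pts = np.argwhere(~mask).astype(float)
    dist = np.empty(mask.shape)
    for p in np.ndindex(mask.shape):
        q = np.asarray(p, dtype=float)
        if mask[p]:
            # inside: negative distance to the nearest outside point
            dist[p] = -np.min(np.linalg.norm(outside_pts - q, axis=1))
        else:
            dist[p] = np.min(np.linalg.norm(inside_pts - q, axis=1))
    return dist

def morph(mask_a: np.ndarray, mask_b: np.ndarray, s: float) -> np.ndarray:
    """Intermediate shape for s in [0, 1] via blended distance fields."""
    d = (1.0 - s) * signed_distance(mask_a) + s * signed_distance(mask_b)
    return d <= 0.0
```

At s = 0 the blend reproduces the first shape, at s = 1 the second; intermediate s gives a shape that transitions between them.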
  • the so computed interpolated image Mj for one deformed shape is then output and may be made available for further image processing. It will be understood that the previous steps can be repeated for any other two input images and, what is more, more than one interpolated image can be computed from any two input images. For instance, in the above mentioned linearized morphing scheme, the deformation parameter p may be varied at any suitable step width to produce any desired number of interpolated images.
  • an interpolated image Mj can be computed.
  • new intermediate models can be constructed for a pair of model images where at least one (or both) is itself a (previous) intermediate model, although it is expected that more accurate results are achievable when using input images which are based on actual imagery of the object.
  • computing the interpolated image from two input images is an exemplary embodiment, as "higher-order" interpolated images are also envisaged where three or more input images are used to compute the (in general non-linear) interpolated image.
  • the one or more interpolated images are used in 3D navigation, but other applications are also envisaged herein. For instance, one may use a graphic machine to generate a graphical rendering of the computed parameters as applied to a user-selectable one of the input images, to visualize on a display device a respective deformation so as to predict how a certain shape state may look.
  • a 2D projection X-ray image including a footprint of the object is received.
  • This 2D projection image was acquired at a known imaging geometry of the X-ray imager IMX, preferably a C-arm type X-ray image acquisition device, with the object present in the field of view of the X-ray imager.
  • a 2D-3D registration is then attempted to register said footprint in the 2D projection image onto a best matching model, i.e. the "model of best fit", from the, now extended, library {Mi,j,k} of model images including the previous input models Mi, Mk and the one or more interpolated image models Mj obtained in step S320.
  • a cost function is defined that measures a deviation between the footprints in the projection plane of the forward-projection of the models M and the acquired 2D projection image.
  • the cost function may be formulated in terms of a (possibly weighted) sum of squared pixel differences, or any other metric can be used. In other words, high cost means low correspondence, which is undesirable.
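A minimal sketch of such a cost function; the uniform default weighting is an assumption for illustration:

```python
import numpy as np

def registration_cost(acquired: np.ndarray, synthesized: np.ndarray,
                      weights=None) -> float:
    """(Possibly weighted) sum of squared pixel differences.

    High cost means low correspondence between the acquired footprint
    and the synthesized (forward-projected) footprint.
    """
    diff = acquired.astype(float) - synthesized.astype(float)
    if weights is None:
        weights = np.ones_like(diff)  # unweighted by default
    return float(np.sum(weights * diff ** 2))
```

A perfect match has zero cost; any other metric (e.g. a robust norm) could be substituted without changing the surrounding optimization.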
  • the registration operation then proceeds in iterations.
  • An initial model (for instance, one of the pre-recorded ones, Mi say) is chosen, and the deviation of its forward-projection from the current 2D projection image is established using the cost function. If this cost is less than a pre-defined cost threshold, the method stops here and the pre-recorded model itself is found to achieve a good enough registration. The method then proceeds to output step S350.
  • otherwise, a next cost is computed, but this time the footprint of a (possibly) translated or rotated version of Mi or of the forward-projected intermediate image Mj is used. That is, for the optimization, in general all parameters are varied: the rigid ones (rotation and/or translation) and the non-rigid deformation(s). If the deviation is less, that is, it incurs less cost than the initial image, but is still above the threshold, a translated or rotated version of Mi or a new intermediate image Mj1 (j1 ≠ j) is computed at step S320, and so on until the iterations over the rigid translations and orientations and intermediate images abort.
  • the new intermediate image may be computed from the pair Mi, Mj or Mj, Mk and/or Mi, Mk. Abortion may be triggered, for instance, if two consecutive costs differ by less than a predefined difference threshold ε > 0 and the latest cost is less than the cost threshold.
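The iteration and abort rule above can be sketched as follows; `cost_of` and `propose` are hypothetical callbacks standing in for the forward-projection cost and the parameter update (they are not named in the patent):

```python
def register_model(cost_of, propose, theta0, cost_threshold, eps, max_iter=100):
    """Iterative registration with the described abort criterion.

    Stops early if the initial candidate is already below the cost
    threshold; otherwise iterates until two consecutive costs differ by
    less than eps AND the latest cost is below the threshold.
    """
    theta = theta0
    prev_cost = cost_of(theta)
    if prev_cost < cost_threshold:
        return theta, prev_cost  # initial model already good enough
    for _ in range(max_iter):
        theta = propose(theta)   # vary rigid and deformation parameters
        cost = cost_of(theta)
        if abs(prev_cost - cost) < eps and cost < cost_threshold:
            return theta, cost   # converged: abort iterations
        prev_cost = cost
    return theta, prev_cost      # fallback after max_iter steps
```

The returned parameters describe the best-fit model, with the caveat from the text that this "best match" need not be a global optimum.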
  • the intermediate model from the last iteration is then output as the model of best registration fit M*, with the understanding that this "best matching" model may not necessarily be a "global" optimum.
  • the new intermediate models may not be computed as above for each step; instead, a pre-defined number of intermediate models may be pre-computed for a predefined step-width to so refine the existing model library.
  • n intermediate models (n > 1) between the (or any) two pre-recorded models are synthesized as per step S320.
  • the cost can then be computed simply for each model in this refined or enlarged model library, and the one with least cost is then returned as the optimal best fit registration.
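This exhaustive variant over a pre-computed library can be sketched as a simple minimum-cost search:

```python
def best_fit(library, cost_of):
    """Return (index, model) of the library entry with least cost.

    `library` holds pre-recorded and pre-computed intermediate models;
    `cost_of` evaluates the registration cost for one model.
    """
    costs = [cost_of(model) for model in library]
    i = min(range(len(costs)), key=costs.__getitem__)
    return i, library[i]
```

Compared with the iterative scheme, this trades extra up-front model synthesis for a trivially parallel evaluation at registration time.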
  • a 3D location coordinate of the segmentation in the interpolated image model Mj of best fit (that has been registered onto the received 2D projection image) is then output.
  • the coordinates in the segmented structure of the interpolated best fit model Mj are then an estimate of the current 3D location for the in-situ device.
  • Other or additional optional outputs are the rigid rotation parameters to estimate the current 3D orientation for the in-situ device, and/or the deformation parameters to estimate the current 3D shape of the in-situ device.
  • the 3D location coordinate may be selected automatically based on a feature selection. For instance, if the object OB is a catheter, it is often the catheter's tip that is used to define the location of the catheter itself. Due to the peculiar shape of the tip portion, this can be found relatively easily by automatic feature selection. In other embodiments, it is the user who selects (by mouse-click, touch screen or e-stylus action, or by way of any other suitable input means action) a desired model portion in the interpolated image of best registration fit, and it is then the 3D coordinates of the so selected model portion that will be output as the location coordinate for the in-situ device.
  • the coordinates of any feature on the registered model correspond to the coordinates of the same feature of the in-situ object OB in the examination region. Therefore, for a sufficiently accurate registration, the output model coordinates will then approximate with good accuracy the real coordinates of the object in the examination region of the X-ray imager. It is assumed herein, that the coordinate scale of the input 3D image models (as supplied by the 3D imaging modality, e.g. CT) can be directly related to the 3D coordinate scale in the examination region of the X-ray imager IMX that supplies the 2D image of the in-situ object OB.
  • a localization method for deformable devices that is based on pre-interventional 3D imagery of a specific target device.
  • the imagery may be X-ray based, such as CT reconstructions, although other modalities such as NMR are also envisaged.
  • From the 3D imagery, data features of the relevant device are segmented or otherwise identified or extracted. In X-ray, this can be achieved by thresholding for different opacities.
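A sketch of such opacity thresholding: metallic device voxels have attenuation well above soft tissue, so a simple threshold extracts the device from a 3D reconstruction. The threshold value below is illustrative only, not taken from the patent:

```python
import numpy as np

def segment_device(volume: np.ndarray, threshold: float = 2000.0) -> np.ndarray:
    """Binary mask of voxels whose attenuation exceeds the threshold.

    Works for highly opaque (e.g. metallic) devices that attenuate far
    more strongly than the surrounding anatomy or background.
    """
    return volume > threshold
```

The resulting mask is what the segmenter module would store as the model's structural footprint.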
  • a deformable model is created by using multiple 3D reconstructions of the device at different appearances or shape states.
  • a morphing algorithm is used to describe a continuous transformation between the different recorded appearances.
  • the deformable model is then registered to, for instance, a single X-ray projection image using additional non-rigid degrees of freedom (that is, in addition to the rigid DOFs of three translations and three rotational degrees) to describe a level of deformation between the different recorded appearances.
  • the proposed method is not restricted to the traditional rigid DOFs (translation and rotation) to achieve a match between projections of 3D models and recorded projection data.
  • the proposed method can be used for any deformable devices.
  • the types of deformable devices contemplated herein are those capable of undergoing non-rigid transformations, that is, there is at least a pair of points in the object whose mutual distance changes when the object is transformed.
  • the proposed method can also or alternatively be applied to instances where the device is decomposable or detachable into two separate parts.
  • the detached components can be modeled and registered separately and treated separately as per the above described method.
  • a detachable multi-component system (such as, but this is exemplary only, the catheter-clip system) is modelled as a whole, that is, one uses model images to capture the whole of the system, with one of the deformation parameters describing the deformation by detachment/disconnection.
  • the gradual removal or detachment of the catheter from the clip is itself modeled by one of the deformation parameter as deformation aspect (on top of the open/closed states of the clip) and the method can be applied as described above.
  • the model generator MG and/or the navigation module NAV may be arranged as a software module or routine with suitable interfaces (such as input port IN and output port OUT) and may be run on a general purpose computing unit or a dedicated computing unit. For instance, model generator MG may be executed on a workstation or operator console of the imaging system 100.
  • the model generator MG with some or all of its components may be resident on the executive agency (such as a general purpose computer, workstation or console) or may be accessed remotely/centrally by the executive agency via a suitable communication network in a distributed architecture.
  • the components may be implemented in any suitable programming language such as C++ or others.
  • the components of the model generator MG may be arranged as dedicated FPGAs (field-programmable gate arrays) or similar standalone chips.
  • a computer program or a computer program element is provided that is characterized by being adapted to execute the method steps of the method according to one of the preceding embodiments, on an appropriate system.
  • the computer program element might therefore be stored on a computer unit, which might also be part of an embodiment of the present invention.
  • This computing unit may be adapted to perform or induce a performing of the steps of the method described above. Moreover, it may be adapted to operate the components of the above-described system.
  • the computing unit can be adapted to operate automatically and/or to execute the orders of a user.
  • a computer program may be loaded into a working memory of a data processor. The data processor may thus be equipped to carry out the method of the invention.
  • This exemplary embodiment of the invention covers both a computer program that uses the invention right from the beginning and a computer program that, by means of an update, turns an existing program into a program that uses the invention.
  • the computer program element might be able to provide all necessary steps to fulfill the procedure of an exemplary embodiment of the method as described above.
  • a computer readable medium, such as a CD-ROM, is presented.
  • the computer readable medium has a computer program element stored on it which computer program element is described by the preceding section.
  • a computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.
  • the computer program may also be presented over a network like the World Wide Web and can be downloaded into the working memory of a data processor from such a network.
  • a medium for making a computer program element available for downloading is provided, which computer program element is arranged to perform a method according to one of the previously described embodiments of the invention.
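The 3+3+n degrees of freedom mentioned in the bullets above (three translations, three rotations, plus one or more non-rigid deformation parameters) can be sketched as follows. This is an illustrative simplification under stated assumptions, not the patent's implementation: it assumes the models are point clouds with known point correspondence and a single deformation parameter s in [0, 1]; the function name and parameter layout are invented for illustration.

```python
import math

def apply_pose_and_deformation(model_a, model_b, params):
    """Apply a 3+3+1 parameter vector to a pair of corresponding
    point-cloud models: three translations, three rotations, and one
    deformation parameter s in [0, 1] blending model_a (s=0) into
    model_b (s=1).  Same list index = same surface point (assumed)."""
    tx, ty, tz, rx, ry, rz, s = params
    # Non-rigid part first: linear blend between the two shape states.
    blended = [tuple(a + s * (b - a) for a, b in zip(pa, pb))
               for pa, pb in zip(model_a, model_b)]

    def rot(p, angle, i, j):
        # Planar rotation of point p in the (i, j) coordinate plane.
        c, si = math.cos(angle), math.sin(angle)
        q = list(p)
        q[i], q[j] = c * p[i] - si * p[j], si * p[i] + c * p[j]
        return tuple(q)

    out = []
    for p in blended:
        # Rigid part: rotate about z, y, x in turn, then translate.
        p = rot(p, rz, 0, 1)
        p = rot(p, ry, 2, 0)
        p = rot(p, rx, 1, 2)
        out.append((p[0] + tx, p[1] + ty, p[2] + tz))
    return out
```

With zero rigid parameters and s=0.5, each output point lies half way between the two recorded shape states.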

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A system and related method for use in navigation tasks. From 3D input images of an object (OB) in different shape states, an intermediate 3D image Mj is computed by an interpolator (INT) or a model generator (MG) for a parameter that describes a deformation between shapes as recorded in the input images Mi,k. This intermediate image is then registered onto a 2D projection image of the same or similar object to establish a location of said device in 3D by a navigation module (NAV).

Description

AUTOMATIC 3D MODEL BASED TRACKING OF DEFORMABLE MEDICAL
DEVICES WITH VARIABLE APPEARANCE
FIELD OF THE INVENTION
The invention relates to an image processing system, to an image processing method, to a computer program element, and to a computer readable medium.

BACKGROUND OF THE INVENTION
In medicine, minimally invasive procedures are usually performed under live X-ray fluoroscopy guidance using an X-ray imaging system that supplies imagery in the form of X-ray projections. Most medical devices are clearly visible on X-ray projections since they include metal structures or other radio-opaque components. A limitation of fluoroscopy-guided procedures is that the information provided by the X-ray apparatus is only two-dimensional. This limits accurate 3D device navigation within the patient since depth information is missing. What is more, some medical devices (such as mitral clips or artificial heart valves) are deformable, which makes accurate tracking and 3D localization even more challenging.
SUMMARY OF THE INVENTION
There may therefore be a need for an image processing system and/or method to support navigation tasks.
The object of the present invention is solved by the subject matter of the independent claims, where further embodiments are incorporated in the dependent claims. It should be noted that the following described aspects of the invention equally apply to the image processing method, to the computer program element and to the computer readable medium.
According to a first aspect of the invention there is provided an image processing system comprising a model generator that includes an interpolator configured to compute, for a given one or more deformation parameters, from at least two pre-recorded models representing a deformable object in different shape states, at least one interpolated model representative of a deformed shape of the object. The image processing system further comprises a navigation module configured to determine a current 3D position of the object in an examination region from a current 2D X-ray projection image. The navigation module further includes a 2D-3D registration module configured to register a footprint of the object in the current 2D X-ray projection image with a best matching one of the pre-recorded and interpolated models. Once the registration has been established, the 3D position of the object in the examination region can be accurately determined. Thus, a 3D position of the object can be obtained from a single 2D X-ray projection.
The shape as per the interpolated model is intermediate between the shapes as per the input models. The deformation parameter represents an additional degree of freedom (on top of a parameter(s) for a mere rigid rotation and/or translation) for deformations and allows the 2D/3D registration of devices that cannot be properly described by a single rigid state. A rigid simplification for such devices would lead to inaccurate 2D/3D registration results. In other words, as per the proposed system, one or more specific deformations are identified and included as an additional variation in the registration process. The non-rigid parameter may describe for instance an opening and/or closing of a clamp, an expansion of a stent, or a bending of a catheter (tip), or any other type of deformation. The one or more deformation parameters represent the transition between initial object shapes as recorded by the pre-recorded models.
According to one embodiment, the operation of the interpolator is based on a morphing algorithm.
According to one embodiment, said deformation parameter has an upper bound and a lower bound, wherein the lower bound is associated with one of the pre-recorded models and the upper bound is associated with the other one of the pre-recorded models, wherein said interpolator is capable of computing an interpolated image for any parameter in the range between the lower and upper bounds. Preferably, in this case, the pre-recorded models represent 'extreme' device states, for example a fully open and a fully closed state.
According to one embodiment, the one or more parameters are associated with a non-rigid shape transition from one of the two input image shapes to the other.
According to one embodiment, there are one or more further parameters associated with a rigid transformation.
According to one embodiment, at least one of the input images is a 3D image.
According to one embodiment, the interpolated image is pre-computed prior to the acquisition of the 2D projection image or the interpolated image is computed after acquisition of the 2D projection image. According to one embodiment, the object is a heart valve clip device or an artificial heart valve, or a stent or any other deformable device.
In sum, the proposed method and system allow an efficient implementation of realistically modelling actual deformations of real objects. This is achieved inter alia by considering a plurality of input models and not merely a single input model. Computational effort is focused on real-world deformation possibilities. No (or barely any) CPU time is "wasted" on computing shape states that are beyond the physical limits of the considered device or object.

BRIEF DESCRIPTION OF THE DRAWINGS
Exemplary embodiments of the invention will now be described with reference to the following drawings wherein:
Fig. 1 shows an imaging arrangement;
Fig. 2 shows X-ray images of different shape states of a device;
Fig. 3 shows a flowchart of an image processing method.
DETAILED DESCRIPTION OF EMBODIMENTS
With reference to Fig. 1 there is shown an arrangement 100 for image-based support, in particular guidance, of an interventional procedure. In such a procedure, a medical device OB such as a stent, an ablation or lasso catheter for electrophysiology procedures or a mitral clip is introduced via a catheter CAT into the interior of the patient. The medical devices listed before are merely exemplary and are not intended to limit what is proposed herein, and any other device, in particular any other medical device, is envisaged herein.
For the sake of definiteness, the proposed arrangement will be explained by way of the example of a mitral clip intervention, which again is purely exemplary and is not to limit what is disclosed herein. Mitral clips are used to treat structural deficiencies in the human (or possibly animal) heart. More particularly, there is a condition that heart valves are no longer closing properly which has the undesirable effect that, during systolic action, a certain quantity of blood backflows ("regurgitates") through said valves into the atrium. This can lead to poor oxygenation which in turn can lead to a raft of other medical complications. To at least somewhat remedy this, a mitral clip has been devised elsewhere which is inserted into the patient through a suitable access point (for example through the femoral artery) and is then advanced through the patient's vasculature by means of a deployment catheter into the human heart opposite the incompetent valve. By means of a mechanical deployment mechanism integrated in the catheter's tip, the clip is then applied to the heart valve's leaves to clip same together thereby avoiding or lessening the amount of regurgitation. It is of note that this grab-and-clip action is done during normal heart activity. For successful clip deployment, precise navigation through the vasculature and also precise positioning at the valve is at a premium. Also, one normally needs to check whether the clip has been properly deployed at the correct spatial location, etc. The procedure is therefore carried out under fluoroscopic guidance where an X-ray imaging apparatus IMX operates to produce a series of fluoroscopic images (that is, X-ray projection images) that are displayed on a screen to the operator. In other words, the fluoroscopic frames or images are used to help navigate the medical device OB through the patient and to oversee its correct application at the correct site.
In one embodiment, but not necessarily all embodiments, there is a second image channel to supply additional imagery, such as from a 3D ultrasound TEE (trans-esophageal echocardiography) probe for better soft tissue visualization.
The imaging apparatus IMX acquires the projection imagery along a given projection direction which however is adjustable by moving a gantry G to which on one side an X-ray source XR is attached. On the other side of the gantry, opposite the X-ray source and across an examination region, is mounted a radiation sensitive detector D. The patient PAT is located within the examination region, or in particular the region of interest of the patient is ensured to be within the examination region, so it can be flooded with X-ray radiation that is emitted by the X-ray source during an image acquisition. The X-ray radiation passes through the region of interest. The X-ray radiation, after interaction with matter in the region of interest, then interacts with the detector D. The data acquisition circuitry converts electrical signals generated by the X-ray radiation (at the X-ray detector D) into respective X-ray frames or images. In some cases, absorption images are generated this way, which encode as contrast the density or attenuation differences in the region of interest for the given projection direction. Other types of contrast such as spectral and phase contrast are also envisaged in some embodiments.
Now, in the above scenario, navigation in 3D is complicated by the fact that the supplied images are merely 2D. To help the operator precisely pinpoint the current position and orientation of the device OB in the 3D examination region, the proposed arrangement includes a navigation module NAV. The navigation module NAV receives as its input a current 2D X-ray projection image (of the object OB resident in the patient) supplied by the X-ray imager IMX. In particular, in one embodiment the navigation module NAV receives a single such frame, or a single frame at a time. The NAV then processes this 2D X-ray image and produces localization information, for instance three-dimensional coordinates p(x,y,z) that indicate the position in 3D of the device in the examination region. The navigation module NAV as proposed herein is model-based. In other words, the navigation module NAV does not only operate on the read-in X-ray frame but also on one or more image models M generated by a model generator MG whose operation will be explained in more detail below.
Broadly, the model generator MG may include input port IN and output port OUT. Operation of model generator MG is based on a "library" of two or more 3D pre-recorded models Mi and Mk. Preferably, but not necessarily, these pre-recorded model images have been generated in a preparatory phase prior to when the 2D projection X-ray image was generated. The images used as a basis for these pre-recorded models may have been acquired by using a 3D image modality such as a computed tomography scanner CT. A series of different 3D image volume blocks is generated to record the device OB (to be used in the intervention) in different shape states.
For instance, as shown in Fig. 2, an exemplary device OB is the previously mentioned mitral clip 202 deployable by its catheter 204. Panes A-C show simulation imagery of such a clip 202. As can be seen in pane A of Fig. 2, such a clip 202 has two jaws through which the clipping action can be implemented. The jaws can be in an open state as shown in pane A and can be actuated to move into a closed state to exercise a pinch action to press the valve's leaflets together. Pane C shows a state where the delivery catheter 204 is disconnected from the clip 202, which remains in situ once deployed. In other words, when the clip-catheter system is looked at as a device as a whole, then one can see that it is capable of a number of different deformation states other than merely opened and closed, but also connected and disconnected/detached and combinations thereof (some of them excluded however, as a disconnected state of the system should imply a closed state for the clip). As will be understood, the shape changes as per panes A through C in Fig. 2 are for illustrative purposes only and in no way limiting. Panes D-F show (from left to right) a similar transitioning of a mitral clip from an open shape state to a closed shape state. Panes D-F are reproductions of actual fluoroscopic frames. Panes A-F are illustrative only and not limiting. Other devices may have other shape states than the ones shown in Fig. 2.
Referring back to the CT scanner CT in Fig. 1, a "library" comprising a series of different 3D images can be generated, each showing the object in a different shape state. Taking advantage of the device's OB opacity, a segmenter module (not shown) can then be used to identify the different footprint shapes of the object in the respective 3D image blocks. The segmented 3D image blocks then form the models Mi, Mk which can be held in a database or other memory so as to be readily accessible by the model generator through its input port IN. The model generator is provided with an interpolator to produce a "fine-tuned" model Mj from the pre-recorded models Mi, Mk. More particularly, the model generator allows fine-tuning the existing library of pre-recorded models. "Models" as used herein include 3D representations of the structural footprint of the object of interest (in this case that of a medical device such as the clip or the catheter-clip system). The 3D models are derivable in this manner, for example via segmentation, from 3D imagery including actual physical information relating to the object OB, for instance X-ray attenuation values. This helps achieve correct registration with 2D X-ray projections as will be explained below in greater detail.
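The opacity-based segmentation described above could, in its simplest form, be sketched as an intensity threshold over the reconstructed volume (metallic device parts being far more radio-opaque than tissue). This is a hedged illustration only; the threshold value and function name are assumptions, not values from the patent.

```python
import numpy as np

def segment_device(volume, hu_threshold=2000.0):
    """Extract the device footprint from a reconstructed 3D volume by
    simple thresholding.  Returns a binary 3D mask plus the voxel
    coordinates of the segmented structure.  The Hounsfield-style
    threshold is an illustrative assumption."""
    mask = volume >= hu_threshold
    coords = np.argwhere(mask)  # (N, 3) array of voxel indices
    return mask, coords
```

A real segmenter would likely add morphological clean-up and connected-component selection; the thresholding step is the part named in the text.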
In one embodiment, the model generator MG uses a morphing algorithm or similar to describe a continuous transformation between the different shape states as recorded by the pre-recorded models Mi and Mk. In yet other words, the computed or synthesized intermediate model Mj is representative of a deformed state of either one of the two recorded shapes. In yet other words, a computed model Mj is to capture an estimate for an inter-state shape as one would obtain if one were to transform or deform continuously one of the shapes Mi or Mk into the respective other shape Mk or Mi.
The newly computed model or intermediate model Mj is then forwarded, along with the pre-recorded models Mi or Mk, to the navigation unit NAV, either directly or via an output port of the model generator MG. The navigation unit NAV includes a 2D-3D registration module RM. Registration module RM then attempts to 2D-3D register the received 2D X-ray frame (from X-ray imager IMX) of object OB with a best matching one of the pre-recorded and/or computed models Mi, Mj, Mk. Ordinarily, the 2D-3D registration module would use only variations over the six spatial degrees of freedom, namely the three translations along the spatial axes x, y, z and three respective rotations thereabout. According to the proposed method herein, in addition to those spatial "rigid" parameters, the deformation parameters include additional, "artificial" or "virtual" degrees of freedom to capture non-rigid transformation aspects between the various models to so find the best model to achieve a best 3D-2D registration result. So, in total, we propose in general to use 3+3+n (n ≥ 1) degrees of freedom for the registration, n being whatever number of parameters results from the morphing ansatz used. In some embodiments, after the registration, the final values of the n deformation parameters are provided by the navigation module NAV as an optional (user-requestable) additional output with the three-dimensional orientation and the position coordinates p(x,y,z). The deformation parameter(s) furnishes the operator with additional information about the state of a deformation of the device OB (e.g. opening angle of a mitral clip, extent of a stent expansion, etc.). Also, the parameters of the rigid motion may be output in addition to or instead of the parameters for the non-rigid deformation part, in dependence on user request.
The coordinates and/or the parameters (for the rigid motion or the deformation) may be shown in numerical form in one or more info-box window widgets overlaid on the current fluoroscopic frame on a display device.
The manner in which the goodness of the registration between the 2D image and the 3D models Mi, Mj, Mk is measured will depend on the algorithmic particulars of the registration algorithm used. For instance, some embodiments envisaged herein use an optimization approach where a cost function is set up that measures a deviation between the footprint of the device or object OB as recorded in the received X-ray image and a synthesized footprint. The synthesized footprint is obtainable by forward-projecting the respective model onto an imaging plane corresponding to that of the received X-ray image. This imaging plane may be defined by the current detector D position and orientation, and by the projection direction at which the received X-ray image has been recorded.
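The forward projection that produces the synthesized footprint can be illustrated with a heavily simplified, axis-aligned perspective geometry: each model point is projected along the ray from the X-ray source onto a detector plane. This sketch is an assumption for illustration; the real imaging geometry (C-arm pose, detector orientation) is more general.

```python
def forward_project(points_3d, source, detector_z):
    """Project 3D model points onto a detector plane at z = detector_z
    along rays emanating from the point X-ray source (sx, sy, sz).
    Returns 2D footprint coordinates in the detector plane.  Assumes
    no point lies at the source depth (z != sz)."""
    sx, sy, sz = source
    footprint = []
    for x, y, z in points_3d:
        t = (detector_z - sz) / (z - sz)  # ray parameter at the plane
        footprint.append((sx + t * (x - sx), sy + t * (y - sy)))
    return footprint
```

With the source at the origin and the detector at z = 2, a point at (1, 0, 1) projects with twofold magnification, as expected for this cone-beam-like setup.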
Because the proposed method allows enlarging the stockpile of 3D models available, the inter-state shape model(s) Mj can be used to achieve a possibly better fit than if one were to restrict oneself only to the pre-recorded models Mi, Mk. It will be appreciated that in the above, reference has been made to one intermediate image Mj, although it will be clear to those skilled in the art that the proposed method allows computing an arbitrary number n of intermediate inter-shape images Mj1, ..., Mjn for any given pair of initial images Mi and Mk. The above described task of finding an intermediate image between two given images, which represents a deformation of either of the two initial images, is essentially a task of interpolation, and the proposed model generator MG includes an interpolator ITP to implement this interpolation.
According to one embodiment, the morphing algorithm as described in D Cohen-Or et al, "Three-Dimensional Distance Field Metamorphosis", ACM Transactions on Graphics, Vol 17, No 2, April 1998, pp 116-141, is used herein, although other deformation or geometrical transformation operations may also be used with benefit. Once the intermediate model Mj has been found that yields a "best" or better registration result, the location coordinates p for the in-situ device OB can be output by navigation module NAV. This is then a simple look-up exercise of establishing for instance a tip position (or any other suitable reference point) of the segmentation as per the used model Mj and using the 3D coordinates in said block as the location coordinates for the "real", in-situ device itself.
Procedurally, the relevant object is placed into the examination region of the 3D imaging modality (e.g. CT) (without the patient residing in the examination region). The different shape states are then realized by actuating the device to effect in steps the deformations in sufficiently small increments, with the respective input images Mi,k acquired for each deformation increment.
In some embodiments, the imaging modalities for the model image acquisition and for the 2D projection image acquisition are physically different; for instance, the modality for the 2D X-ray may be a projective X-ray radiographic system and the 3D modality for acquisition of the models may be a CT scanner. But this may not be so necessarily for all embodiments; for instance, the 3D images and the 2D images during the intervention may be acquired by the same imaging equipment, for instance a modern C-arm X-ray imager may be so used. In this case, where the same imager is used for the shape gathering phase and the actual intervention, the model images are acquired first, without the patient present. After the 3D images have been recorded to capture the different shape states of the relevant medical device OB, the patient is then admitted into the examination region and the intervention can then commence.
The input images/models Mi,k are preferably acquired using the very same object prior to its use in the intervention. Strictly speaking, this may not necessarily be required, and an essentially identical or at least sufficiently similar copy or duplicate of the device to be used (or already in use) can be taken instead for the shape gathering phase to obtain the model imagery Mi,k. This may be useful for instance if the clip OB or device in question is already introduced into the patient and being advanced to its deployment site. One can then use the 3D imaging modality to acquire the models Mi,k by using the duplicate clip and then forward the models to the model generator MG when needed.
The flowchart in Fig. 3 shows diagrammatically steps of an image processing method as implemented by the above arrangement in Fig. 1.
At step S310, (at least) two input models Mi, Mk are received. The input models represent different shapes of a deformable object OB, for instance a medical device. The input images are preferably derived from 3D images of the deformable object OB recorded at different stages or states of deformation in a "shape gathering phase" prior to the intervention. Such input imagery is obtainable by a tomographic scanner (CT) for instance or another 3D image modality such as NMR. In CT, a plurality of CT projection images obtained from different projection directions around the body (preferably, but not necessarily, in a full revolution or at least over a 180° arc) are reconstructed (e.g., by filtered back-projection (FBP)) into a plurality of cross-section images that together form a 3D image block. The blocks for the different shape states or deformation instances (e.g., for the mitral clip, the "open" and "closed" states and one or more states between these two extreme states) then represent respective "snapshots" of object OB at their respective deformation states. These 3D blocks may then be segmented for image structure(s) that represent the object. The so segmented blocks then constitute the received input images Mi, Mk, also referred to herein as input models. Although having merely two such input models may be sufficient in some applications, usually there is a larger number of models available, in the 10s or even in the 100s for complex cases.
At step S320, an interpolated model is computed from the two input models. The interpolated model is representative of an "estimate" of how the object shape may look at a particular instance if one were to deform either of the two object shapes (as recorded in the two input images) into the other. The interpolated image is computed by varying one or more deformation parameters. A morphing algorithm can be used such as the one referenced above. Other algorithms are also envisaged. Topologically, a continuous (in the mathematical sense) transformation is defined. This continuous transformation T is set up to map a shape index parameter s in a range [a,c] (bounded by a from below and by c from above) onto the two images. In practice, the interval can be normalized down to the unit interval [0,1]. The lower parameter value, say a=0, is then mapped on, say, the first input model Mi and the upper bound value c=1 of the parameter is then mapped on the second input model Mk. An arbitrary value of the parameter s=b in between the two values a, c can then return a respective interpolation image T(b) that captures a specific, "virtual", intermediate deformation state of either one of the object shapes as per the two input images. For instance, in a "linear" morphing approach, one may construct a surface model made up from surface points on and for each input model Mi,k. On the surfaces of the two (or more) models Mi,k, pairs of corresponding surface points are constructed either manually or by some geometric recognition module, e.g. local 3D patch matching around feature points like edges and corners. Another option for automatic correspondence detection is an N-SIFT algorithm as described by W Cheung et al in "N-SIFT: N-dimensional scale invariant feature transform for matching medical images" (4th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, ISBI 2007, pp 720-723), or a local restriction thereof. A line is then cast through each pair of corresponding points. One can then obtain estimates or synthesized intermediate shapes by defining an intermediate point on each respective line, say at p% (e.g. p=50%, that is, half way) between the two points of the respective pair, where p parameterizes the respective lines. In this way one can define a surface model of the intermediate shape if one carries out a linear interpolation on the line between each pair of surface points. It will be understood that the segmentations from the input images feed through into the intermediate image, so no new segmentation (of the intermediate image) is required.
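The "linear" morphing scheme just described, lines cast through corresponding surface points and intermediate points taken at a fraction p along them, can be sketched as follows. Point correspondence (same index = matched pair) is assumed to have been established beforehand; the function name is illustrative.

```python
def intermediate_models(points_a, points_b, n):
    """Generate n intermediate surface models along the lines joining
    corresponding points of two input models (linear morphing).  The
    inputs themselves are not returned, only the strictly intermediate
    shapes at evenly spaced fractions p = i/(n+1), i = 1..n."""
    models = []
    for i in range(1, n + 1):
        p = i / (n + 1)
        models.append([tuple(ax + p * (bx - ax) for ax, bx in zip(pa, pb))
                       for pa, pb in zip(points_a, points_b)])
    return models
```

For n = 1 this yields the half-way shape; larger n refines the library at a finer step width, as discussed further below.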
At step S330, the so computed interpolated image Mj for one deformed shape is then output and may be made available for further image processing. It will be understood that the previous steps can be repeated for any other two input images and, what is more, more than one interpolated image can be computed from any two input images. For instance, in the above mentioned linearized morphing scheme, the deformation parameter p may be varied at any suitable step width to produce any desired number of interpolated images/models along the respective lines connecting the surface models. In other words, for any given parameter p, an interpolated image Mj can be computed. Also, new intermediate models can be constructed for a pair of model images where at least one (or both) is itself a (previous) intermediate model, although it is expected that more accurate results are achievable when using input images which are based on actual imagery of the object. Also it should be appreciated that computing the interpolated image from two input images is an exemplary embodiment, as "higher-order" interpolated images are also envisaged where three or more input images are used to compute the (in general non-linear) interpolated image.
In one embodiment, the one or more interpolated images (representative of intermediate shape states) are used in 3D navigation but other applications are also envisaged herein. For instance, one may use a graphic machine to generate a graphical rendering of the computed parameters as applied to a user-selectable one of the input images to visualize on a display device a respective deformation so as to predict how a certain shape state may look.
To this end, at step S340, a 2D projection X-ray image including a footprint of the object is received. This 2D projection image was acquired at a known imaging geometry of an X-ray imager IMX, preferably a C-arm type X-ray image acquisition device, with the object present in the field of view of the X-ray imager. A 2D-3D registration is then attempted to register said footprint in the 2D projection image onto a best matching model, i.e. the "model of best fit" from the, now extended, library {Mi,j,k} of model images including the previous input models Mi, Mk and the one or more interpolated image models Mj obtained in step S320. Because the model library now also includes intermediate shapes, and because any such intermediate shape models can be computed at step S320, more accurate 3D-2D registration can be achieved in the proposed "dynamic" approach as compared to a "static" approach restricted to a selection of pre-recorded input images Mi and Mk.
In one embodiment, for the registration, a cost function is defined that measures a deviation between the footprint, in the projection plane, of the forward-projection of the models M and the acquired 2D projection image. The cost function may be formulated in terms of a (possibly weighted) sum of squared pixel differences, or any other metric can be used. In other words, high cost means low correspondence, which is undesirable.
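The weighted sum-of-squared-differences cost just described can be written down directly. A minimal sketch, assuming the two footprints are equally sized 2D pixel arrays (lists of rows) and an optional per-pixel weight array; names are illustrative:

```python
def registration_cost(projected_footprint, observed_footprint, weights=None):
    """(Possibly weighted) sum of squared pixel differences between
    the forward-projected model footprint and the footprint observed
    in the 2D X-ray frame.  High cost = low correspondence."""
    cost = 0.0
    for r, (row_p, row_o) in enumerate(zip(projected_footprint,
                                           observed_footprint)):
        for c, (p, o) in enumerate(zip(row_p, row_o)):
            w = 1.0 if weights is None else weights[r][c]
            cost += w * (p - o) ** 2
    return cost
```

Identical footprints incur zero cost; each mismatched pixel adds its (weighted) squared difference.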
Preferably, the registration operation then proceeds in iterations. An initial model (for instance, one of the pre-recorded ones, Mi say) is read in and the deviation of its forward-projection from the current 2D projection image is established using the cost function. If this cost is less than a pre-defined cost threshold, the method stops here and the pre-recorded model itself is found to achieve a good enough registration. The method then proceeds to output step S350.
If the cost exceeds or equals the pre-defined cost threshold, a next cost is computed, but this time using the footprint of a (possibly) translated or rotated version of Mi or of the forward-projected intermediate image Mj. That is, for the optimization, in general all parameters are varied: the rigid ones (rotation and/or translation) and the non-rigid deformation(s). If the deviation is less, that is, it incurs less cost than the initial image, but is still above the threshold, a translated or rotated version of Mi or a new intermediate image Mj1 (j1 ≠ j) is computed at step S320, and so on until the iterations over the rigid translations, orientations and intermediate images terminate. The new intermediate image may be computed from the pair Mi,Mj or Mj,Mk and/or Mi,Mk. Termination may be triggered, for instance, if two consecutive costs differ by less than a predefined difference threshold ε>0 and the latest cost is less than the cost threshold. The intermediate model from the last iteration is then output as the model of best registration fit M*, with the understanding that this "best matching" model may not necessarily be a "global" optimum.
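The iteration described above may be sketched as follows, reduced to a single deformation parameter p for readability (in the embodiment, the rigid translations and rotations are varied in the same loop). Here cost_of stands for forward-projecting the candidate model for parameter p and evaluating the cost function against the acquired 2D image; all names and the search strategy are illustrative assumptions:

```python
def register_deformation(cost_of, p_init=0.0, step=0.25,
                         cost_threshold=1.0, eps=1e-6, max_iter=100):
    """Toy 1-D search over the deformation parameter p in [0, 1].

    cost_of : callable p -> cost of the interpolated model Mj(p).
    Stops when two consecutive costs differ by less than eps and the
    latest cost is below cost_threshold, mirroring the termination
    rule described above.  Returns (p, cost) of the best fit found.
    """
    p = p_init
    prev_cost = cost_of(p)
    if prev_cost < cost_threshold:
        return p, prev_cost          # initial model already registers well
    for _ in range(max_iter):
        # probe neighbouring parameter values and move to the cheaper one
        candidates = [max(0.0, p - step), min(1.0, p + step)]
        costs = [cost_of(c) for c in candidates]
        best = 0 if costs[0] <= costs[1] else 1
        if costs[best] < prev_cost:
            p, cost = candidates[best], costs[best]
        else:
            step *= 0.5              # no improvement: refine the step width
            cost = prev_cost
        if abs(prev_cost - cost) < eps and cost < cost_threshold:
            return p, cost           # converged below the cost threshold
        prev_cost = cost
    return p, prev_cost
```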
In one embodiment, the new intermediate models are not computed as above at each step; instead, a pre-defined number of intermediate models may be pre-computed at a predefined step-width so as to refine the existing model library. In other words, for some or any two pre-recorded models, n intermediate models (n>1) between those two pre-recorded models are synthesized as per step S320. The cost can then be computed simply for each model in this refined or enlarged model library, and the one with least cost is then returned as the optimal best-fit registration.
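This pre-computed variant may be sketched as follows; refine_library and best_fit are illustrative names, and the interpolation routine of step S320 is assumed to be supplied as a callable:

```python
def refine_library(pre_recorded, n, interpolate):
    """Enlarge the model library: between each consecutive pair of
    pre-recorded models, synthesize n intermediate models at evenly
    spaced deformation parameters."""
    library = []
    for m_a, m_b in zip(pre_recorded, pre_recorded[1:]):
        library.append(m_a)
        for s in range(1, n + 1):
            library.append(interpolate(m_a, m_b, s / (n + 1)))
    library.append(pre_recorded[-1])
    return library

def best_fit(library, cost_of):
    """Return the library model incurring the least registration cost."""
    return min(library, key=cost_of)
```

The registration then reduces to evaluating the cost once per library entry and returning the cheapest model, at the price of pre-computing and storing the refined library.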
At step S350, a 3D location coordinate of the segmentation in the interpolated image model Mj of best fit (that has been registered onto the received 2D projection image) is then output. The coordinates of the segmented structure in the interpolated best-fit model Mj are then an estimate of the current 3D location of the in-situ device. Other or additional optional outputs are the rigid rotation parameters, to estimate the current 3D orientation of the in-situ device, and/or the deformation parameters, to estimate the current 3D shape of the in-situ device.
The 3D location coordinate may be selected automatically based on a feature selection. For instance, if the object OB is a catheter, it is often the catheter's tip that is used to define the location of the catheter itself. Due to the peculiar shape of the tip portion, this can be found relatively easily by automatic feature selection. In other embodiments, it is the user who selects (by mouse click, touch screen or e-stylus action, or by way of any other suitable input means) a desired model portion in the interpolated image of best registration fit, and it is then the 3D coordinates of the so-selected model portion that will be output as the location coordinate for the in-situ device. Since the forward-projection of any model is calculated in the coordinate system of the examination region (which is identical to the coordinate system of the X-ray imager IMX), the coordinates of any feature on the registered model (e.g. its tip) correspond to the coordinates of the same feature of the in-situ object OB in the examination region. Therefore, for a sufficiently accurate registration, the output model coordinates will approximate with good accuracy the real coordinates of the object in the examination region of the X-ray imager. It is assumed herein that the coordinate scale of the input 3D image models (as supplied by the 3D imaging modality, e.g. CT) can be directly related to the 3D coordinate scale in the examination region of the X-ray imager IMX that supplies the 2D image of the in-situ object OB.
In sum, we propose a localization method for deformable devices that is based on pre-interventional 3D imagery of a specific target device. The imagery may be X-ray based, such as CT reconstructions, although other modalities such as NMR are also envisaged. From the 3D imagery data, features of the relevant device are segmented or otherwise identified or extracted. In X-ray, this can be achieved by thresholding for different opacities. A deformable model is created by using multiple 3D reconstructions of the device at different appearances or shape states. A morphing algorithm is used to describe a continuous transformation between the different recorded appearances. The deformable model is then registered to, for instance, a single X-ray projection image using additional non-rigid degrees of freedom (that is, in addition to the rigid DOFs of three translations and three rotations) to describe a level of deformation between the different recorded appearances. In other words, the proposed method is not restricted to the traditional rigid DOFs (translation and rotation) to achieve a match between projections of 3D models and recorded projection data.
With reference to Fig. 2, the proposed method can be used for any deformable device. In particular, among the types of deformable devices contemplated herein are those capable of undergoing non-rigid transformations, that is, there is at least one pair of points in the object whose mutual distance changes when the object is transformed.
The proposed method can also or alternatively be applied to instances where the device is decomposable or detachable into two separate parts. In this instance, as for example shown in panes C,F of Fig. 2 above at the example of the mitral clip, which is detachable from its deployment catheter, the detached components can be modeled, registered and treated separately as per the above-described method. However, in another embodiment a detachable multi-component system (such as, but this is exemplary only, the catheter-clip system) is modeled as a whole, that is, one uses model images to capture the whole of the system, with one of the deformation parameters describing the deformation by detachment/disconnection. In other words, the gradual removal or detachment of the catheter from the clip is itself modeled by one of the deformation parameters as a deformation aspect (on top of the open/closed states of the clip), and the method can be applied as described above.
The model generator GM and/or the navigation module NAV may be arranged as a software module or routine with suitable interfaces (such as input port IN and output port OUT) and may be run on a general purpose computing unit or a dedicated computing unit. For instance, the model generator GM may be executed on a workstation or operator console of the imaging system 100. The model generator GM, with some or all of its components, may be resident on the executive agency (such as a general purpose computer, workstation or console) or may be accessed remotely/centrally by the executive agency via a suitable communication network in a distributed architecture. The components may be implemented in any suitable programming language such as C++ or others.
In one embodiment, the components of the model generator GM may be arranged as dedicated FPGAs (field-programmable gate array) or similar standalone chips.
In another exemplary embodiment of the present invention, a computer program or a computer program element is provided that is characterized by being adapted to execute the method steps of the method according to one of the preceding embodiments, on an appropriate system.
The computer program element might therefore be stored on a computer unit, which might also be part of an embodiment of the present invention. This computing unit may be adapted to perform or induce a performing of the steps of the method described above. Moreover, it may be adapted to operate the components of the above-described system. The computing unit can be adapted to operate automatically and/or to execute the orders of a user. A computer program may be loaded into a working memory of a data processor. The data processor may thus be equipped to carry out the method of the invention.
This exemplary embodiment of the invention covers both a computer program that right from the beginning uses the invention and a computer program that by means of an update turns an existing program into a program that uses the invention.
Further on, the computer program element might be able to provide all necessary steps to fulfill the procedure of an exemplary embodiment of the method as described above.
According to a further exemplary embodiment of the present invention, a computer readable medium, such as a CD-ROM, is presented wherein the computer readable medium has a computer program element stored on it which computer program element is described by the preceding section.
A computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.
However, the computer program may also be presented over a network like the World Wide Web and can be downloaded into the working memory of a data processor from such a network. According to a further exemplary embodiment of the present invention, a medium for making a computer program element available for downloading is provided, which computer program element is arranged to perform a method according to one of the previously described embodiments of the invention.
It has to be noted that embodiments of the invention are described with reference to different subject matters. In particular, some embodiments are described with reference to method type claims whereas other embodiments are described with reference to the device type claims. However, a person skilled in the art will gather from the above and the following description that, unless otherwise notified, in addition to any combination of features belonging to one type of subject matter also any combination between features relating to different subject matters is considered to be disclosed with this application.
However, all features can be combined providing synergetic effects that are more than the simple summation of the features.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing a claimed invention, from a study of the drawings, the disclosure, and the dependent claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.

Claims

CLAIMS:
1. Image processing system (IPS) comprising:
a model generator (MG) and
a navigation module (NAV),
wherein the model generator (MG) comprises an interpolator (ITP) configured to compute, for a given one or more deformation parameters, from at least two pre-recorded models (Mi, Mk) representing a deformable object (OB) in different shape states, at least one interpolated model (Mj) representative of a deformed shape of the object (OB),
and wherein the navigation module (NAV) is configured to determine a current 3D position of the object (OB) in an examination region from a current 2D X-ray projection image, the navigation module (NAV) further including:
a 2D-3D registration module (RM) configured to register a footprint of the object (OB) in the current 2D X-ray projection image with a best matching one of the pre-recorded and interpolated models Mi,j,k.
2. Image processing system of claim 1, wherein the registration module (RM) is configured to measure a deviation between the footprint of the object (OB) as recorded in the current 2D X-ray projection image and a synthesized footprint of the object (OB) obtainable by forward projecting a respective one of the models Mi,j,k.
3. Image processing system of claim 2, wherein the synthesized footprint is obtainable along a projection direction at which the current 2D X-ray projection image has been recorded.
4. Image processing system of any one of the preceding claims, wherein the navigation module (NAV) is configured to compute, based on the registered footprint, a 3D location coordinate for the object in an examination region between a detector (D) and an X- ray source (XR) of an X-ray imaging apparatus (IMX).
5. Image processing system of any one of the previous claims, wherein operation of the interpolator (ITP) is based on a morphing algorithm.
6. Image processing system of any one of the previous claims, wherein said one or more deformation parameters have an upper bound and a lower bound, wherein the lower bound is associated with a first one Mi of the pre-recorded models and the upper bound is associated with the other one Mk of the pre-recorded models, wherein said interpolator (ITP) is adapted for computing an interpolated image for any deformation parameter in a range between the upper bound and the lower bound.
7. Image processing system of any one of the previous claims, wherein the one or more deformation parameters are associated with a non-rigid shape transition of the object (OB) from one of the two pre-recorded models to the other.
8. Image processing system of claim 6 or 7 wherein at least one of the input images is a 3D image.
9. Image processing system of any one of the previous claims, wherein the interpolated image is pre-computed prior to the acquisition of the 2D projection image or the interpolated image is computed after acquisition of the 2D projection image.
10. Image processing system of any one of the previous claims, wherein the object is a mitral clip, and the pre-recorded models represent an open state and a closed state of said mitral clip.
11. Image processing method comprising:
receiving (S310) at least two pre-recorded models of a deformable object, the models representing the object in two different shapes;
computing (S320), for one or more given deformation parameters, from the two pre-recorded models an interpolated model representative of a deformed shape of the object;
registering (S340) a footprint of the object (OB) in a current 2D X-ray projection image with a best matching one of the pre-recorded and interpolated models Mi,j,k, and determining (S350) a current 3D position of the object (OB) in an examination region based on the registration.
12. A computer program element for controlling a system according to any one of claims 1-10, which, when being executed by a processing unit, is adapted to perform the method steps of claim 11.
13. A computer readable medium having stored thereon the program element of claim 12.
PCT/EP2016/053556 2015-02-20 2016-02-19 Automatic 3d model based tracking of deformable medical devices with variable appearance WO2016131955A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP15155908.5 2015-02-20
EP15155908 2015-02-20

Publications (1)

Publication Number Publication Date
WO2016131955A1 2016-08-25

Family

ID=52705932

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2016/053556 WO2016131955A1 (en) 2015-02-20 2016-02-19 Automatic 3d model based tracking of deformable medical devices with variable appearance

Country Status (1)

Country Link
WO (1) WO2016131955A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011031134A1 (en) * 2009-09-14 2011-03-17 Erasmus University Medical Center Rotterdam Image processing method and system
US8649555B1 (en) * 2009-02-18 2014-02-11 Lucasfilm Entertainment Company Ltd. Visual tracking framework

Non-Patent Citations (3)

Title
BERLINGER KAJETAN: "Fiducial-Less Compensation of Breathing Motion in Extracranial Radiosurgery", INTERNET CITATION, 2006, pages 1 - 135, XP002566066, Retrieved from the Internet <URL:http://deposit.d-nb.de/cgi-bin/dokserv?idn=985158697&dok_var=d1&dok_ext=pdf&filename=985158697.pdf> [retrieved on 20100129] *
COHEN-OR D ET AL: "THREE-DIMENSIONAL DISTANCE FIELD METAMORPHOSIS", ACM TRANSACTIONS ON GRAPHICS (TOG), ACM, US, vol. 17, no. 2, 1 April 1998 (1998-04-01), pages 116 - 141, XP000754616, ISSN: 0730-0301, DOI: 10.1145/274363.274366 *
LEONARDI VALENTIN ET AL: "Multiple reconstruction and dynamic modeling of 3D digital objects using a morphing approach", VISUAL COMPUTER, SPRINGER, BERLIN, DE, vol. 31, no. 5, 3 June 2014 (2014-06-03), pages 557 - 574, XP035492423, ISSN: 0178-2789, [retrieved on 20140603], DOI: 10.1007/S00371-014-0978-6 *

Cited By (3)

Publication number Priority date Publication date Assignee Title
WO2017070205A1 (en) * 2015-10-23 2017-04-27 Wisconsin Alumni Research Foundation System and method for dynamic device tracking using medical imaging systems
US9652862B1 (en) 2015-10-23 2017-05-16 Wisconsin Alumni Research Foundation System and method for dynamic device tracking using medical imaging systems
US20220211440A1 (en) * 2021-01-06 2022-07-07 Siemens Healthcare Gmbh Camera-Assisted Image-Guided Medical Intervention

Similar Documents

Publication Publication Date Title
JP7440534B2 (en) Spatial registration of tracking system and images using 2D image projection
CN109589170B (en) Left atrial appendage closure guidance in medical imaging
Haouchine et al. Image-guided simulation of heterogeneous tissue deformation for augmented reality during hepatic surgery
JP6745879B2 (en) System for tracking an ultrasound probe in a body part
JP6448972B2 (en) Medical image processing apparatus and medical image processing method
CN103687541B (en) Visualization for navigation guidance
JP6936882B2 (en) Medical viewing system with viewing surface determination
US10052032B2 (en) Stenosis therapy planning
Song et al. Locally rigid, vessel-based registration for laparoscopic liver surgery
BR112012013706B1 (en) METHOD FOR PROCESSING AN X-RAY IMAGE AND SYSTEM FOR A COMBINATION OF ULTRASOUND AND X-RAY IMAGES
US20180189966A1 (en) System and method for guidance of laparoscopic surgical procedures through anatomical model augmentation
JP6620252B2 (en) Correction of probe induced deformation in ultrasonic fusion imaging system
CN108369736A (en) Method and system for the tissue volume for calculating excision according to image data in 2D/2.5D arts
CN104272348B (en) For the imaging device and method being imaged to object
JP2014140742A (en) Method and apparatus for tracking object in target area of moving organ
JP7528082B2 (en) Methods, devices and systems for planning intracavity probe procedures - Patents.com
JP2009000509A (en) Edge detection in ultrasonic images
US11382603B2 (en) System and methods for performing biomechanically driven image registration using ultrasound elastography
JP6400725B2 (en) Image processing apparatus and method for segmenting a region of interest
EP1692661B1 (en) Three-dimensional reconstruction of an object from projection photographs
CN108885797B (en) Imaging system and method
Wein et al. Automatic non-linear mapping of pre-procedure CT volumes to 3D ultrasound
Groher et al. Planning and intraoperative visualization of liver catheterizations: new CTA protocol and 2D-3D registration method
WO2016131955A1 (en) Automatic 3d model based tracking of deformable medical devices with variable appearance
JP2015036084A (en) Image processing system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16705520

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16705520

Country of ref document: EP

Kind code of ref document: A1