WO2014127321A2 - Biomechanically driven registration of pre-operative image to intra-operative 3d images for laparoscopic surgery - Google Patents


Info

Publication number: WO2014127321A2
Authority: WIPO (PCT)
Application number: PCT/US2014/016686
Other languages: French (fr)
Other versions: WO2014127321A3 (en)
Inventors: Ozan Oktay, Li Zhang, Peter Mountney, Tommaso Mansi, Philip Mewes
Original Assignee: Siemens Aktiengesellschaft; Siemens Corporation
Application filed by Siemens Aktiengesellschaft and Siemens Corporation
Publication of WO2014127321A2 (en)
Publication of WO2014127321A3 (en)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing

Definitions

  • the following disclosure describes the present invention according to several embodiments directed at methods, systems, and apparatuses for registering pre-operative images to intra-operative images using a biomechanically driven registration framework.
  • the technology is particularly well-suited to, but not limited to, minimally invasive surgical applications which utilize three-dimensional models generated intra-operatively, for example, using a rotational angiography system.
  • a laparoscopic camera is used to provide the surgeon with a visualization of the anatomical area of interest which may enhance the safety and accuracy of the procedure. For example, when removing a tumor, the surgeon's goal is to safely remove the tumor without damaging critical structures such as vessels.
  • the laparoscopic camera can only visualize the surface of the tissue. This makes localizing sub-surface structures, such as vessels and tumors, challenging. Therefore, intra-operative 3D images are introduced to provide updated information. While the intra-operative images typically have limited image information due to the constraints imposed in operating rooms, the pre-operative images can provide supplementary anatomical and functional details, and carry accurate segmentation of organs, vessels, and tumors. To bridge the gap between surgical plans and laparoscopic images, registration of pre- and intraoperative 3D images is needed. However, this registration is challenging due to liquid injection or gas insufflation, breathing motion, and other surgical preparation which results in large organ deformation and sliding between viscera and abdominal wall. Therefore, a standard non-rigid registration method cannot be directly applied and enhanced registration techniques which account for deformation and sliding are needed.
  • Embodiments of the present invention address and overcome one or more of the above shortcomings and drawbacks, by providing methods, systems, articles of manufacture, and apparatuses for registration of pre-operative images to intra-operative images driven by biomechanical modeling of abdomen deformation resulting from, for example, gas insufflation or liquid injection.
  • a coupling between the registration and an insufflation model is achieved by optimizing the intensity similarity measure between the modeled pre-operative image and the intra-operative image.
  • the critical structures and target anatomy may be accurately overlaid on the tissue and subsurface features may be visualized.
  • the techniques discussed herein may be used, for example, to provide enhanced visualization to surgeons during minimally invasive surgical operations.
  • a computer-implemented method of using a biomechanical model to constrain registration of a pre-operative image of an anatomical area of interest to an intra-operative image of the anatomical area of interest includes initializing a plurality of model parameters comprising biomechanical parameter values. Next, in some embodiments, a rigid registration of the pre-operative image to the intra-operative image may be performed. Then, an iterative optimization process is performed a plurality of times. In one embodiment, the iterative optimization process includes four stages. First, a biomechanical displacement field is generated by applying the biomechanical model to the pre-operative image using the model parameters. Second, the biomechanical displacement field is applied to the pre-operative image to generate a biomechanically deformed pre-operative image.
  • the biomechanically deformed pre-operative image is compared with the intra-operative image to yield a similarity measurement value.
  • the similarity measurement value is used to update the model parameters for a subsequent iteration.
  • a diffeomorphic registration of the biomechanically deformed pre-operative image to the intra-operative image is performed.
  • the biomechanically deformed pre-operative image is presented overlaying the intra-operative image on a display.
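The four-stage iterative loop described above can be sketched with a deliberately simplified 1-D toy. This is a hedged illustration, not the patent's implementation: the biomechanical model is reduced to a single pressure-like shift parameter, a crude discrete parameter search replaces the gradient-based update, and sum-of-squared-differences stands in for the intensity similarity measure. All function names are hypothetical.

```python
import numpy as np

def model_displacement(image, pressure):
    # Toy biomechanical model: a uniform displacement field whose
    # magnitude is controlled by a single pressure-like parameter.
    return np.full(image.shape, pressure, dtype=float)

def warp(image, pressure):
    # Stages 1 and 2: generate the displacement field, then deform the image.
    field = model_displacement(image, pressure)
    idx = np.arange(image.size) - np.round(field).astype(int)
    return image[np.clip(idx, 0, image.size - 1)]

def similarity(a, b):
    # Stage 3: negative sum of squared differences (higher = more similar).
    return -float(np.sum((a - b) ** 2))

def register(pre_op, intra_op, pressure=0.0, n_iters=100):
    for _ in range(n_iters):
        deformed = warp(pre_op, pressure)
        s_here = similarity(deformed, intra_op)
        # Stage 4: update the model parameter for the next iteration by
        # probing neighbouring parameter values (a crude discrete search).
        s_up = similarity(warp(pre_op, pressure + 1), intra_op)
        s_down = similarity(warp(pre_op, pressure - 1), intra_op)
        if s_up > s_here:
            pressure += 1
        elif s_down > s_here:
            pressure -= 1
        else:
            break  # no neighbouring parameter improves the similarity
    return pressure, warp(pre_op, pressure)
```

In this toy the deformed pre-operative image matches the intra-operative image once the pressure parameter reaches the true displacement; the patent's framework instead updates a full set of biomechanical parameters from the gradient of an NMI-based similarity measure.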
  • the biomechanical displacement field used in the aforementioned method may correspond to displacement from various fluids including, for example, liquids, gases, or a combination of liquids and gases.
  • the biomechanical displacement field corresponds to displacement of a portion of the anatomical area of interest (e.g., an abdominal wall region) due to gas insufflation or liquid injection.
  • a pressure value associated with the fluid may be used to generate the biomechanical displacement field.
  • a plurality of mesh elements may be identified based on the pre-operative image, then a plurality of surface points may be identified on each of the mesh elements.
  • the biomechanical model applies an insufflated gas pressure value to the surface points to generate the biomechanical displacement field.
  • the biomechanical model applies a liquid injection value to the surface points to generate the biomechanical displacement field.
  • the model parameters used in the aforementioned method further comprises one or more dynamic parameter values.
  • the method further comprises receiving a pressure measurement value from an insufflation device and the dynamic parameter values further include the pressure measurement value.
  • the dynamic parameter values comprise one or more hemodynamic parameter values.
  • the dynamic parameter values may be utilized in additional ways.
  • the method further includes determining a sensitive tissue location in the anatomical area of interest based on the dynamic parameter values and presenting the sensitive tissue location on the display.
  • the method includes determining a time to perform a new intra-operative scan based on the dynamic parameter values and performing the new intra-operative scan at the time to acquire an updated intra-operative image.
  • a computer-implemented method of using a biomechanical model to constrain registration of a pre-operative image of an anatomical area of interest to an intra-operative image of the anatomical area of interest includes initializing a plurality of model parameters comprising mechanical parameter values.
  • these model parameters include an insufflated gas pressure value or a liquid pressure value.
  • the model parameters further comprise one or more dynamic parameter values such as, for example, one or more hemodynamic parameter values.
  • a rigid registration of the pre-operative image to the intra-operative image is performed. Then, a biomechanically deformed pre-operative image is generated by applying the biomechanical model to the pre-operative image.
  • a statistical gradient of intensity similarity value is computed based on the biomechanically deformed pre-operative image and the intra-operative image. This statistical gradient of intensity similarity value is then used to update the plurality of model parameters.
  • a registered pre-operative image is generated by applying a diffeomorphic model to the biomechanically deformed pre-operative image. Then, the registered pre-operative image is presented overlaying the intra-operative image on a display.
  • the aforementioned method further includes segmenting the pre-operative image into a plurality of anatomical regions and generating a plurality of mesh elements, each mesh element associated with one of the anatomical regions.
  • the biomechanically deformed pre-operative image is generated by applying the biomechanical model to the pre-operative image according to two steps. First, the biomechanical model is used to generate a biomechanical gas insufflation displacement field based on the plurality of mesh elements and the plurality of model parameters. Then, the biomechanical gas insufflation displacement field is applied to the plurality of mesh elements to yield a plurality of deformed mesh elements. Alternatively, if liquid injection is used, the biomechanical model may be used to generate a biomechanical liquid injection displacement field based on the mesh elements and the model parameters. Then, the biomechanical liquid injection displacement field may be applied to the mesh elements to yield the deformed mesh elements.
  • the registered pre-operative image is generated in the aforementioned method in two steps. First, the diffeomorphic model is used to generate a diffeomorphic displacement field based on the deformed mesh elements and the intraoperative image. Next, the diffeomorphic displacement field is applied to the deformed mesh elements to yield a plurality of warped mesh elements. The registered pre-operative image then comprises the warped mesh elements.
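A diffeomorphic displacement field of the kind used in this refinement step is typically obtained by exponentiating a smooth velocity field, which guarantees an invertible, fold-free deformation. Below is a minimal 1-D sketch of the standard scaling-and-squaring construction; this is an assumption about the method, since the patent does not fix a specific diffeomorphic algorithm.

```python
import numpy as np

def exponentiate_velocity(v, n_squarings=6):
    """Turn a stationary velocity field v into a deformation phi = exp(v)
    by scaling and squaring: scale v down by 2**n_squarings, then compose
    the small deformation with itself n_squarings times (1-D version,
    composition by linear interpolation)."""
    x = np.arange(v.size, dtype=float)
    phi = x + v / (2 ** n_squarings)   # small, near-identity deformation
    for _ in range(n_squarings):
        phi = np.interp(phi, x, phi)   # phi <- phi o phi
    return phi
```

For a constant velocity the result is, away from the boundary, a pure translation; in a 3-D registration the same construction yields the diffeomorphic displacement field that is applied to the deformed mesh elements.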
  • a system for constraining registration of a pre-operative image of an anatomical area of interest to an intra-operative image of the anatomical area of interest comprises a modeling processor, a registration processor, and a display.
  • the modeling processor is configured to perform an iterative optimization process a plurality of times.
  • the iterative optimization process includes generating a biomechanical displacement field by applying a biomechanical model to the pre-operative image using a plurality of model parameters, applying the biomechanical displacement field to the pre-operative image to generate a biomechanically deformed pre-operative image, comparing the biomechanically deformed pre-operative image with the intra-operative image to yield a similarity measurement value, and using the similarity measurement value to update the plurality of model parameters for a subsequent iteration.
  • the registration processor is configured to perform a diffeomorphic registration of the biomechanically deformed pre-operative image to the intra-operative image.
  • the display is configured to present the biomechanically deformed pre-operative image overlaying the intra-operative image.
  • the aforementioned system also includes a rotational angiography system configured to generate the intra-operative image.
  • the system also includes an insufflation device configured to provide a pressure measurement value to the modeling processor that may be used as a model parameter.
  • FIG. 1 shows a computer-assisted surgical system, used in some embodiments of the present invention
  • FIG. 2 provides a high-level overview of a biomechanically driven registration framework for registering an intra-operative image to a pre-operative image, according to some embodiments of the present invention
  • FIG. 3 presents an overview of a biomechanically driven registration framework for registering an intra-operative image to a pre-operative image where the model parameters can be updated in an iterative process, according to some embodiments of the present invention
  • FIG. 4 shows two images illustrating the effect of the deformation on the abdominal wall, liver, and surrounding tissues;
  • FIG. 5 illustrates a process in which a biomechanical model is used to constrain registration of a pre-operative image of an anatomical area of interest to an intra-operative image of the anatomical area of interest, according to some embodiments of the present invention
  • FIG. 6 provides pseudocode for implementing a non-rigid image registration with diffeomorphic and insufflation model constraints, as may be used in some embodiments of the present invention.
  • FIG. 7 illustrates an exemplary computing environment within which embodiments of the invention may be implemented.
  • the following disclosure describes the present invention according to several embodiments directed at methods, systems, and apparatuses for registering pre-operative images to intra-operative images using a biomechanically driven registration framework.
  • this framework uses both intensity and gas insufflation (or liquid injection) information in the transformation model to estimate deformations to the anatomical area of interest.
  • the various methods, systems, and apparatuses described herein are especially applicable to, but not limited to, minimally invasive surgical techniques.
  • FIG. 1 shows a computer-assisted surgical system 100, used in some embodiments of the present invention.
  • the system 100 includes components which may be categorized generally as being associated with a pre-operative site 105 or an intra-operative site 110.
  • the various components located at each site 105, 110 may be operably connected with a network 115.
  • the components may be located at different areas of a facility, or even at different facilities.
  • the pre-operative site 105 and the intra-operative site 110 are co-located.
  • the network 115 may be absent and the components may be directly connected.
  • a small scale network (e.g., a local area network) may alternatively be used to connect the components.
  • an imaging system 105A is used to gather planning data.
  • the imaging system 105A gathers images using any of a variety of imaging modalities including, for example, tomographic modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), single-photon emission computed tomography (SPECT), and positron emission tomography (PET).
  • the gathered pre-operative images are referred to generally herein as the pre-operative planning data.
  • after this pre-operative planning data is generated by the imaging system 105A, it is transferred (e.g., via network 115) to a database 110A at the intra-operative site 110.
  • the intra-operative site 110 includes various components 110A, 110B, 110C, 110D, 110E, and 110F.
  • while FIG. 1 only illustrates a single imaging computer 110F, in other embodiments, multiple imaging computers may be used. Collectively, the one or more imaging computers provide functionality for viewing, manipulating, communicating, and storing medical images on computer readable media. Example implementations of computers that may be used as the imaging computer 110F are described below with reference to FIG. 7.
  • imaging device 110B is a rotational angiography system which includes a fixed C-Arm that rotates around the patient to acquire a series of rotational projection images of an anatomical area of interest that are subsequently reconstructed into two-dimensional images. These intra-operative images may subsequently be used to generate a three-dimensional model of the area of interest. Collectively, the reconstructed two-dimensional images and three-dimensional model are referred to herein as intra-operative planning data. In some embodiments, image reconstruction and/or three-dimensional model generation occurs within the imaging device 110B itself, using dedicated computer hardware (not shown in FIG. 1). Then, the intra-operative planning data is transferred to the imaging computer 110F for use during surgery.
  • captured rotational projection images are transferred to the imaging computer 110F for reconstruction into two-dimensional images and generation of the three-dimensional model.
  • while imaging device 110B is a rotational angiography system in the embodiment illustrated in FIG. 1, in other embodiments, different types of imaging devices may be used, including CT, MRI, and structured light devices.
  • the imaging computer 110F retrieves the intra-operative planning data from the database 110A for presentation on a display 110E to help guide the surgical team performing the operation.
  • the imaging computer 110F may immediately present the received intra-operative planning data upon receipt from the imaging device 110B. Additional information may also be overlaid on the display 110E.
  • the display is configured to present a biomechanically deformed pre-operative image overlaying the intraoperative image.
  • multiple displays may be used, for example, to display different perspectives of the anatomical area of interest (e.g., based on pre-operative planning data and/or the intra-operative planning data), indications of sensitive tissue areas, or messages indicating that a new intra-operative scan should be performed to update the intra-operative planning data.
  • the laparoscope 110D is a medical instrument through which structures within the abdomen and pelvis can be seen during surgery. Typically a small incision is made in a patient's abdominal wall allowing the laparoscope to be inserted.
  • laparoscopes include, for example, telescopic rod lens systems (usually connected to a video camera) and digital systems where a miniature digital video camera is placed at the end of the laparoscope.
  • laparoscopes may be configured to capture stereo images using either a two-lens optical system or a single optical channel.
  • the tracking system 110C provides tracking data to the imaging computer 110F for use in registration of the intra-operative planning data (received from device 110B) with data gathered by laparoscope 110D.
  • an optical tracking system 110C is depicted.
  • other techniques may be used for tracking including, without limitation, electromagnetic (EM) tracking and/or robotic encoders.
  • the system further includes a gas insufflation device (not shown in FIG. 1) which may be used to expand the anatomical area of interest (e.g., the abdomen) to provide additional working room or reduce obstruction during surgery.
  • This insufflation device may be configured to provide pressure measurement values to the imaging computer 110F for display or for use in other applications such as the modeling techniques described herein.
  • devices such as liquid injection systems may be used to create and measure pressure during surgery as an alternative to the aforementioned gas insufflation device.
  • the framework includes two main steps: registration constrained by a gas insufflation model 210 followed by a diffeomorphic non-rigid refinement 215 which, in combination, result in an aligned pre-operative image 220.
  • the gas insufflation model constrained registration step 210 computes the deformations and organ shifts caused by gas pressure, using a biomechanical model which is based on the mechanical parameters and pressure level.
  • This model is applied to the pre-operative image 205B to achieve an initial alignment with the intra-operative image 205A, which accounts for both non-rigid and rigid transformations caused by the insufflation.
  • other fluid pressure values may be used in the constrained registration step 210 rather than those associated with gas insufflation.
  • pressure values associated with liquid injection are used as an alternative to gas pressure values.
  • the biomechanical model is incorporated into the registration framework 200 by coupling the model parameters with an intensity similarity measure. In the diffeomorphic registration section 215 of the framework 200, the surface differences between the pre-operative image 205B (warped according to the biomechanical model) and the intra-operative image 205A are refined.
  • the image voxels of the pre-operative image may be represented with tetrahedral mesh objects generated after image segmentation.
  • this segmentation may be manually performed by a clinician, while in other embodiments semi-automatic or automatic segmentation techniques may be employed.
  • the results of the segmentation will be based on the anatomical area of interest. For example, for laparoscopic surgical applications, the pre-operative image may be segmented into liver, abdominal wall, and surrounding tissues. The deformation is then estimated using the mesh elements corresponding to these organs.
  • M, C, and K are the matrices representing the mechanical properties of the tissues, including the object mass, damping factor, and object stiffness, respectively.
  • the input images are I_m (the pre-operative or "moving" image) and I_r (the intra-operative or "reference" image).
  • the voxel positions are denoted by u and the image similarity gradient forces are represented by f(u, I_m, I_r).
  • the partial differential equation (1) is solved for u iteratively to obtain the similarity maximizing voxel displacement fields.
  • This mechanical model may be modified to adapt it for the laparoscopic surgery.
  • the gas insufflation effect may be included in the optimization using the same mechanical constraints (left-hand side of the equation).
  • the general form of the modified mechanical equilibrium equations may be formulated as
  • M d²u/dt² + C du/dt + K u = α f(u, I_m, I_r) + f_f(u)    (2)
  • f_f(u) is the fluid (e.g., gas or liquid) pressure force field. This external force is uniformly distributed on the tissue surfaces and may be applied at each optimization iteration.
  • the coupling between the standard FEM and gas/liquid forces is achieved using the parameter α. Initially, the value of α is set to be less than one and, as the algorithm converges, its value is increased to fine-tune the deformations using the intensity information.
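The right-hand side of equation (2), and the gradual hand-over from the fluid force to the intensity force via α, can be written out directly. This is a sketch: the text does not specify the ramp schedule for α, so a linear one is assumed here, and both function names are hypothetical.

```python
import numpy as np

def rhs_forces(f_image, f_fluid, alpha):
    # Right-hand side of Eq. (2): alpha * image-similarity forces
    # plus the fluid (gas or liquid) pressure force field.
    return alpha * f_image + f_fluid

def alpha_schedule(iteration, n_iters, alpha_start=0.1):
    # Hypothetical linear ramp: alpha begins below one and grows to one
    # as the algorithm converges, letting the intensity information
    # fine-tune the deformation in late iterations.
    t = iteration / max(n_iters - 1, 1)
    return alpha_start + (1.0 - alpha_start) * t
```

Early on, the fluid pressure term dominates and drives the bulk insufflation deformation; as α approaches one, the image-similarity forces take over to fine-tune the alignment.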
  • the tissue model can be brought to a higher degree of realism by introducing dynamic and functional parameters rather than using static mechanical properties.
  • the blood vessels in the liver usually experience deformation due to resection during surgery. In turn, this leads to bleeding and tissue stiffness variations due to rapid delivery of blood, which can be modeled with hemodynamics or other dynamic parameters.
  • the dynamic tissue stiffness and response monitoring can provide useful information to the surgeon during the procedure. For example, the surgeon can get feedback on sensitive tissue locations (e.g., presented on a display) and perform the surgery with laparoscopic instruments accordingly. This information can also indicate the time to perform a new intra-operative scan to observe the anatomical changes and critical tissue locations.
  • FIG. 3 provides an overview of a biomechanically driven registration framework in which the model parameters can be updated in an iterative process.
  • a Pre-Operative Image Segmentation module 305 segments the image voxels of the pre-operative image into one or more elements. For example, in one embodiment, these elements include an abdominal wall element, a liver element, and an element representing surrounding tissues. Then, a Tetrahedral Mesh Generation module 310 generates tetrahedral mesh objects for each element. These tetrahedral mesh objects are then used as input into a simulation comprising a Pneumoperitoneum Generation module 315, a Finite Element Solver module 320, and a Mesh-to-Image Reconstruction module 325. This simulation may be developed using any technique known in the art. For example, in one embodiment, the Simulation Open Framework Architecture (SOFA) framework is used to implement the framework.
  • the modules shown in FIG. 3 may be configured in a variety of ways to implement the techniques described herein.
  • a co-rotational finite element method is used for modeling and displacement fields are computed with an implicit Euler solver.
  • a mechanical system comprising the abdominal wall, liver, and the surrounding tissues is stimulated with the external force field generated in the abdominal cavity, which represents the insufflated gas in this example.
  • the applied force field, together with the internal spring and bending forces, deforms the mesh elements while preserving the mesh topology.
  • FIG. 4 provides two images illustrating the effect of the deformation on the abdominal wall, liver, and surrounding tissues.
  • Image 400 shows the initial mesh before the insufflation, while image 405 shows the same region after insufflation.
  • the force, acceleration, and displacement fields for each node element are integrated and computed in an iterative approach.
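One plausible form of this iterative integration is sketched below. This is an assumption: the text does not give the scheme, so a semi-implicit Euler step for M u'' + C u' + K u = f is shown, whereas a production solver (e.g., the implicit Euler solver mentioned above, as in SOFA) would solve a linear system per step.

```python
import numpy as np

def semi_implicit_euler_step(u, v, f_ext, M_inv, C, K, dt):
    # Acceleration from the force balance: M a = f_ext - C v - K u.
    a = M_inv @ (f_ext - C @ v - K @ u)
    v = v + dt * a   # update velocity first...
    u = u + dt * v   # ...then displacement with the new velocity
    return u, v
```

Iterating this step on a damped system drives the nodal displacements toward the static equilibrium K u = f_ext, which is what the converged insufflation deformation represents.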
  • the model parameters may include, for example, the gas (or liquid) pressure, mesh stiffness, and Young's modulus for the mesh elements.
  • the initial parameter values may be set, for example, using experimentally obtained values, and the initial pressure level may be collected during surgery.
  • the simulation procedure is combined with the intensity similarity computation and parameter update. Specifically, for each iteration, deformed mesh elements are transformed back to the image volume. This may be achieved, for example, by using the node displacements together with thin-plate spline interpolation and back-mapping algorithms.
  • a normalized mutual information (NMI) Cost Function module 335 compares an intra-operative image 330 to the pre-operative image warped by the simulation to determine a cost (i.e., a measure of the similarity of the two images) and update one or more parameters for the simulation.
  • once the simulation receives the updated parameters, it computes a new warped pre-operative image which is used as input to the NMI Cost Function module 335. This process continues until the cost reaches a predetermined threshold value, at which point a Diffeomorphic Non-Rigid Registration module 340 registers the final warped pre-operative image with the intra-operative image.
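An NMI cost of this kind is commonly computed from a joint intensity histogram. A compact sketch follows, using Studholme's normalization NMI = (H(A) + H(B)) / H(A, B); this is one standard definition, assumed here since the patent does not spell out its exact variant.

```python
import numpy as np

def nmi(a, b, bins=32):
    """Normalized mutual information of two images.
    Equals 2.0 for identical (non-constant) images and approaches
    1.0 for statistically independent ones."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)   # marginal distribution of image a
    py = pxy.sum(axis=0)   # marginal distribution of image b

    def entropy(p):
        p = p[p > 0]       # 0 * log 0 is taken as 0
        return -float(np.sum(p * np.log(p)))

    return (entropy(px) + entropy(py)) / entropy(pxy)
```

In the framework of FIG. 3, a value like this would be evaluated between the warped pre-operative image and the intra-operative image 330 at each iteration, and its gradient used to update the simulation parameters.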
  • FIG. 5 illustrates a process 500 in which a biomechanical model is used to constrain registration of a pre-operative image of an anatomical area of interest to an intraoperative image of the anatomical area of interest, according to some embodiments of the present invention.
  • process 500 shown in FIG. 5 also updates the model parameters via an iterative process.
  • the method begins at 505 as pre-operative images are received, for example, from a previously generated MRI or CT scan.
  • the pre-operative image is segmented and mesh elements are identified.
  • an intraoperative image is received, for example, from a rotational angiography scan.
  • a rigid registration process (not shown in FIG. 5) may then be performed to initially align the pre-operative and intra-operative images.
  • model parameters are initialized.
  • the model parameters include biomechanical, as well as dynamic parameters (e.g., hemodynamic values).
  • Some of the initial parameter values may be set, for example, using experimentally obtained values, while other parameter values (e.g., an initial gas or liquid pressure level) may be collected during surgery.
  • a pressure measurement is received during surgery from an insufflation or injection device. This pressure measurement may then be used either directly as a parameter value or it may be used to derive a model parameter.
  • the initial parameters are used as input into an iterative process comprising steps 525, 530, 535, and 540.
  • This iterative process is executed a plurality of times to apply a biomechanical displacement field to the pre-operative image.
  • the process is repeated until a gradient of intensity similarity between iterations is smaller than a pre-determined threshold value.
  • a biomechanical displacement field is generated by applying the biomechanical model to the pre-operative image using the model parameters.
  • the biomechanical displacement field represents a displacement of a portion of an anatomical area of interest.
  • the portion of the anatomical area of interest corresponds to an abdominal wall region and the biomechanical displacement field corresponds to displacement of a portion of the anatomical area of interest due to gas insufflation or liquid injection.
  • this displacement field is generated based on the mesh elements identified in the segmented pre-operative image. For example, a plurality of surface points on each of the mesh elements may be identified. Then, the biomechanical model can apply an insufflated gas pressure value to the surface points to generate the biomechanical displacement field. In embodiments where liquid injection is used, a liquid injection pressure value may be applied as an alternative to the gas pressure value.
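Applying the pressure value to the mesh surface points can be sketched as follows. The lumping scheme is a hypothetical choice, since the patent does not specify the discretization: each surface triangle contributes a force of pressure times area along its normal, split equally among its three vertices.

```python
import numpy as np

def pressure_forces(vertices, triangles, pressure):
    # Per-vertex nodal forces from a uniform gas/liquid pressure applied
    # to a triangulated tissue surface (vertices: (N, 3); triangles: (M, 3)).
    forces = np.zeros_like(vertices, dtype=float)
    for tri in triangles:
        p0, p1, p2 = vertices[tri]
        n = np.cross(p1 - p0, p2 - p0)  # |n| = 2 * triangle area
        f = 0.5 * pressure * n          # pressure * area * unit normal
        forces[tri] += f / 3.0          # lump equally onto the 3 vertices
    return forces
```

For a closed surface these nodal forces push the mesh outward along its normals, mimicking the expansion of the abdominal wall under insufflation; the same routine with a liquid pressure value covers the liquid injection case.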
  • the biomechanical displacement field is applied to the pre-operative image to generate a biomechanically deformed pre-operative image.
  • the biomechanically deformed pre-operative image is compared with the intra-operative image to yield a similarity measurement value. Any technique known in the art may be used to perform the image comparison and generate the similarity measurement value.
  • the similarity measurement value is used to update the plurality of model parameters for a subsequent iteration.
  • a diffeomorphic registration of the biomechanically deformed pre-operative image to the intraoperative image is performed.
  • the biomechanically deformed pre-operative image is presented on a display overlaying the intra-operative image.
  • additional information resulting from the process 500 may be used to enhance surgical applications. For example, in one embodiment, a sensitive tissue location is determined in the anatomical area of interest based on the one or more dynamic parameter values used in the process 500. This tissue location may be presented on a display with an indication (e.g., color) that highlights its sensitivity.
  • the surgeon may be presented with a visual and/or audible warning when approaching the sensitive location with a surgical device.
  • the dynamic parameters are used to determine a time for performing a new intra-operative scan. As this time approaches, a visual and/or audible indicator may be presented to alert the surgical team that a new intra-operative scan should be performed.
  • the time may be used to automatically perform the scan. It should be noted that this time may be derived far in advance of when the scan is needed. Thus, any automatic or manual preparation of the device providing the intra-operative scan may be done while surgery is being performed, allowing minimal time to be lost transitioning between surgery and intra-operative scanning.
  • FIG. 6 provides pseudocode 600 for implementing a non-rigid image registration with diffeomorphic and insufflation model constraints, as may be used in some embodiments of the present invention.
  • although the pseudocode in FIG. 6 is based on an insufflation model, it should be understood that a fluid injection model or other models may be substituted in the pseudocode based on the clinical application.
  • the functions DF(.) and BM(.) perform the diffeomorphic and biomechanical gas insufflation model regularization on the displacement fields, respectively.
  • the mechanical parameter set is denoted by θ, and each element in the set is updated at the end of iteration t.
  • the gradient of intensity similarity measure S is used in determining the parameter update direction, and α determines its step size.
  • the optimization procedure stops when the change in S is smaller than some constant ε.
  • the optimization loop returns the warped pre-operative image and the displacement fields Udf and Ubm.
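The loop that FIG. 6 describes can be sketched as follows. This is one illustrative reading of the pseudocode, not the patented implementation: `bm` and `df` stand in for the BM(.) and DF(.) regularization functions, `similarity` for the intensity measure S, and the gradient of S with respect to the mechanical parameters is approximated here by finite differences. The image warping (`preop + u`) is a deliberate simplification of resampling under a displacement field:

```python
import numpy as np

def register(preop, intraop, theta0, similarity, bm, df,
             alpha=0.1, eps=1e-6, max_iter=100):
    """Sketch of the FIG. 6 optimization loop with biomechanical (BM)
    and diffeomorphic (DF) regularization of the displacement fields."""
    theta = np.asarray(theta0, dtype=float)  # mechanical parameter set
    s_prev = -np.inf
    for t in range(max_iter):
        u_bm = bm(preop, theta)          # biomechanical displacement field
        warped = preop + u_bm            # stand-in for resampling the image
        s = similarity(warped, intraop)
        if abs(s - s_prev) < eps:        # stop when the change in S is small
            break
        # finite-difference approximation of the gradient of S w.r.t. theta
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            d = np.zeros_like(theta)
            d[i] = 1e-4
            s_i = similarity(preop + bm(preop, theta + d), intraop)
            grad[i] = (s_i - s) / 1e-4
        theta += alpha * grad            # alpha sets the update step size
        s_prev = s
    u_df = df(warped, intraop)           # diffeomorphic refinement
    return warped + u_df, u_df, u_bm
```

With a toy one-parameter model the loop drives the parameter toward the value that best explains the intra-operative image, after which the diffeomorphic step absorbs the residual deformation.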
  • FIG. 7 illustrates an exemplary computing environment 700 within which embodiments of the invention may be implemented.
  • This environment 700 may be used, for example, to implement a portion of one or more components used at the pre-operative site 107 or the intra-operative site 110 illustrated in FIG. 1.
  • Computing environment 700 may include computer system 710, which is one example of a computing system upon which embodiments of the invention may be implemented.
  • Computers and computing environments, such as computer system 710 and computing environment 700, are known to those of skill in the art and thus are described briefly here.
  • the computer system 710 may include a communication mechanism such as a bus 721 or other communication mechanism for communicating information within the computer system 710.
  • the system 710 further includes one or more processors 720 coupled with the bus 721 for processing the information.
  • the processors 720 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as used herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks, and may comprise any one or combination of hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device.
  • a processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer.
  • a processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between.
  • a user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof.
  • a user interface comprises one or more display images enabling user interaction with a processor or other device.
  • a system for using a biomechanical model to constrain image registration includes a segmentation processor configured to perform segmentation tasks; a modeling processor configured to perform modeling tasks (e.g., the iterative optimization process described herein); and a registration processor configured to perform diffeomorphic registration and/or other registration tasks.
  • the computer system 710 also includes a system memory 730 coupled to the bus 721 for storing information and instructions to be executed by processors 720.
  • the system memory 730 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 731 and/or random access memory (RAM) 732.
  • the system memory RAM 732 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM).
  • the system memory ROM 731 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM).
  • system memory 730 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 720.
  • a basic input/output system 733 (BIOS) containing the basic routines that help to transfer information between elements within computer system 710, such as during start-up, may be stored in ROM 731.
  • RAM 732 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 720.
  • System memory 730 may additionally include, for example, operating system 734, application programs 735, other program modules 736 and program data 737.
  • the computer system 710 also includes a disk controller 740 coupled to the bus 721 to control one or more storage devices, such as a magnetic hard disk 741 and a removable media drive 742 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid state drive).
  • the storage devices may be added to the computer system 710 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).
  • the computer system 710 may also include a display controller 765 coupled to the bus 721 to control a display or monitor 766, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user.
  • the computer system includes an input interface 760 and one or more input devices, such as a keyboard 762 and a pointing device 761, for interacting with a computer user and providing information to the processor 720.
  • the pointing device 761 for example, may be a mouse, a light pen, a trackball, or a pointing stick for communicating direction information and command selections to the processor 720 and for controlling cursor movement on the display 766.
  • the display 766 may provide a touch screen interface which allows input to supplement or replace the communication of direction information and command selections by the pointing device 761.
  • the computer system 710 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 720 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 730.
  • Such instructions may be read into the system memory 730 from another computer readable medium, such as a hard disk 741 or a removable media drive 742.
  • the hard disk 741 may contain one or more datastores and data files used by embodiments of the present invention. Datastore contents and data files may be encrypted to improve security.
  • the processors 720 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 730.
  • hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
  • the computer system 710 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein.
  • the term "computer readable medium” as used herein refers to any medium that participates in providing instructions to the processor 720 for execution.
  • a computer readable medium may take many forms including, but not limited to, non-transitory, nonvolatile media, volatile media, and transmission media.
  • Non-limiting examples of nonvolatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as hard disk 741 or removable media drive 742.
  • Non-limiting examples of volatile media include dynamic memory, such as system memory 730.
  • Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the bus 721. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
  • the computing environment 700 may further include the computer system 710 operating in a networked environment using logical connections to one or more remote computers, such as remote computer 780.
  • Remote computer 780 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 710.
  • computer system 710 may include modem 772 for establishing communications over a network 771, such as the Internet. Modem 772 may be connected to system bus 721 via user network interface 770, or via another appropriate mechanism.
  • Network 771 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 710 and other computers (e.g., remote computing system 780).
  • the network 771 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-11, or any other wired connection generally known in the art.
  • Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite, or any other wireless connection methodology generally known in the art.
  • computers in computing environment 700 may include a hardware or software receiver module (not shown in FIG. 7) configured to receive one or more data items used in performing the techniques described herein.
  • An executable application comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input.
  • An executable procedure is a segment of code or machine readable instruction, sub-routine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.
  • a graphical user interface comprises one or more display images, generated by a display processor and enabling user interaction with a processor or other device and associated data acquisition and processing functions.
  • the GUI also includes an executable procedure or executable application.
  • the executable procedure or executable application conditions the display processor to generate signals representing the GUI display images. These signals are supplied to a display device which displays the image for viewing by the user.
  • the processor under control of an executable procedure or executable application, manipulates the UI display images in response to signals received from the input devices. In this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device.
  • the functions and process steps herein may be performed automatically or wholly or partially in response to user command.
  • An activity (including a step) performed automatically is performed in response to one or more executable instructions or device operation without user direct initiation of the activity.
  • the embodiments of the present invention can be included in an article of manufacture comprising, for example, a non-transitory computer readable medium.
  • This computer readable medium may have embodied therein a method for facilitating one or more of the techniques utilized by some embodiments of the present invention.
  • the article of manufacture may be included as part of a computer system or sold separately.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A computer-implemented method of using a biomechanical model to constrain registration of a pre-operative image of an anatomical area of interest to an intra-operative image of the anatomical area of interest begins by initializing a plurality of model parameters comprising biomechanical parameter values. Then, an iterative optimization process is performed a plurality of times. During this iterative process, a biomechanical displacement field is generated by applying the biomechanical model to the pre-operative image using the plurality of model parameters. Next, the biomechanical displacement field is applied to the pre-operative image to generate a biomechanically deformed pre-operative image. The biomechanically deformed pre-operative image is compared with the intra-operative image to yield a similarity measurement value. Using the similarity measurement value, the plurality of model parameters are updated for a subsequent iteration. Once the iterative optimization process is complete, a diffeomorphic registration of the biomechanically deformed pre-operative image to the intra-operative image is performed. Then, the biomechanically deformed pre-operative image is presented overlaying the intra-operative image on a display.

Description

BIOMECHANICALLY DRIVEN REGISTRATION OF PRE-OPERATIVE IMAGE TO INTRAOPERATIVE 3D IMAGES FOR LAPAROSCOPIC SURGERY
[1] This application claims priority to U.S. provisional application Serial No.
61/765,489 filed February 15, 2013 and U.S. provisional application Serial No. 61/865,172 filed August 13, 2013, both of which are incorporated herein by reference in their entirety.
TECHNICAL FIELD
[2] The following disclosure describes the present invention according to several embodiments directed at methods, systems, and apparatuses for registering pre-operative images to intra-operative images using a biomechanically driven registration framework. The technology is particularly well-suited to, but not limited to, minimally invasive surgical applications which utilize three-dimensional models generated intra-operatively, for example, using a rotational angiography system.
BACKGROUND
[3] In abdominal minimally invasive surgery such as liver resection, a laparoscopic camera is used to provide the surgeon with a visualization of the anatomical area of interest which may enhance the safety and accuracy of the procedure. For example, when removing a tumor, the surgeon's goal is to safely remove the tumor without damaging critical structures such as vessels.
[4] During surgery, the laparoscopic camera can only visualize the surface of the tissue. This makes localizing sub-surface structures, such as vessels and tumors, challenging. Therefore, intra-operative 3D images are introduced to provide updated information. While the intra-operative images typically have limited image information due to the constraints imposed in operating rooms, the pre-operative images can provide supplementary anatomical and functional details, and carry accurate segmentation of organs, vessels, and tumors. To bridge the gap between surgical plans and laparoscopic images, registration of pre- and intraoperative 3D images is needed. However, this registration is challenging due to liquid injection or gas insufflation, breathing motion, and other surgical preparation which results in large organ deformation and sliding between viscera and abdominal wall. Therefore, a standard non-rigid registration method cannot be directly applied and enhanced registration techniques which account for deformation and sliding are needed. SUMMARY
[5] Embodiments of the present invention address and overcome one or more of the above shortcomings and drawbacks, by providing methods, systems, articles of manufacture, and apparatuses for registration of pre-operative images to intra-operative images driven by biomechanical modeling of abdomen deformation resulting from, for example, gas insufflation or liquid injection. In some embodiments, for example, a coupling between the registration and an insufflation model is achieved by optimizing the intensity similarity measure between the modeled pre-operative image and the intra-operative image. In this way, the critical structures and target anatomy may be accurately overlaid on the tissue and subsurface features may be visualized. The techniques discussed herein may be used, for example, to provide enhanced visualization to surgeons during minimally invasive surgical operations.
[6] According to some embodiments of the present invention, a computer- implemented method of using a biomechanical model to constrain registration of a preoperative image of an anatomical area of interest to an intra-operative image of the anatomical area of interest includes initializing a plurality of model parameters comprising biomechanical parameter values. Next, in some embodiments, a rigid registration of the preoperative image to the intra-operative image may be performed. Then, an iterative optimization process is performed a plurality of times. In one embodiment, the iterative optimization process includes four stages. First, a biomechanical displacement field is generated by applying the biomechanical model to the pre-operative image using the model parameters. Second, the biomechanical displacement field is applied to the pre-operative image to generate a biomechanically deformed pre-operative image. Third, the biomechanically deformed pre-operative image is compared with the intra-operative image to yield a similarity measurement value. Finally, the similarity measurement value is used to update the model parameters for a subsequent iteration. Following the iterative optimization process, a diffeomorphic registration of the biomechanically deformed pre-operative image to the intra-operative image is performed. Then, the biomechanically deformed pre-operative image is presented overlaying the intra-operative image on a display.
[7] The biomechanical displacement field used in the aforementioned method may correspond to displacement from various fluids including, for example, liquids, gases, or a combination of liquids and gases. For example, in one embodiment, the biomechanical displacement field corresponds to displacement of a portion of the anatomical area of interest (e.g., an abdominal wall region) due to gas insufflation or liquid injection. A pressure value associated with the fluid may be used to generate the biomechanical displacement field. For example, a plurality of mesh elements may be identified based on the pre-operative image, then a plurality of surface points may be identified on each of the mesh elements. Next, in one embodiment, the biomechanical model applies an insufflated gas pressure value to the surface points to generate the biomechanical displacement field. In another embodiment, the biomechanical model applies a liquid injection value to the surface points to generate the biomechanical displacement field.
[8] In some embodiments, the model parameters used in the aforementioned method further comprise one or more dynamic parameter values. For example, in one embodiment, the method further comprises receiving a pressure measurement value from an insufflation device and the dynamic parameter values further include the pressure measurement value. In one embodiment, the dynamic parameter values comprise one or more hemodynamic parameter values. The dynamic parameter values may be utilized in additional ways. For example, in one embodiment, the method further includes determining a sensitive tissue location in the anatomical area of interest based on the dynamic parameter values and presenting the sensitive tissue location on the display. In one embodiment, the method includes determining a time to perform a new intra-operative scan based on the dynamic parameter values and performing the new intra-operative scan at the time to acquire an updated intra-operative image.
[9] According to other embodiments of the present invention, a computer- implemented method of using a biomechanical model to constrain registration of a preoperative image of an anatomical area of interest to an intra-operative image of the anatomical area of interest includes initializing a plurality of model parameters comprising mechanical parameter values. In some embodiments, these model parameters include an insufflated gas pressure value or a liquid pressure value. In one embodiment, the model parameters further comprise one or more dynamic parameter values such as, for example, one or more hemodynamic parameter values. A rigid registration of the pre-operative image to the intra-operative image is performed. Then, a biomechanically deformed pre-operative image is generated by applying the biomechanical model to the pre-operative image. A statistical gradient of intensity similarity value is computed based on the biomechanically deformed pre-operative image and the intra-operative image. This statistical gradient of intensity similarity value is then used to update the plurality of model parameters. Next, a registered pre-operative image is generated by applying a diffeomorphic model to the biomechanically deformed pre-operative image. Then, the registered pre-operative image is presented overlaying the intra-operative image on a display.
[10] In some embodiments, the aforementioned method further includes segmenting the pre-operative image into a plurality of anatomical regions and generating a plurality of mesh elements, each mesh element associated with one of the anatomical regions. In one embodiment, the biomechanically deformed pre-operative image is generated by applying the biomechanical model to the pre-operative image according to two steps. First, the biomechanical model is used to generate a biomechanical gas insufflation displacement field based on the plurality of mesh elements and the plurality of model parameters. Then, the biomechanical gas insufflation displacement field is applied to the plurality of mesh elements to yield a plurality of deformed mesh elements. Alternatively, if liquid injection is used, the biomechanical model may be used to generate a biomechanical liquid injection displacement field based on the mesh elements and the model parameters. Then, the biomechanical liquid injection displacement field may be applied to the mesh elements to yield the deformed mesh elements.
[11] In one embodiment, the registered pre-operative image is generated in the aforementioned method in two steps. First, the diffeomorphic model is used to generate a diffeomorphic displacement field based on the deformed mesh elements and the intraoperative image. Next, the diffeomorphic displacement field is applied to the deformed mesh elements to yield a plurality of warped mesh elements. The registered pre-operative image then comprises the warped mesh elements.
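The second step of the preceding paragraph, applying the diffeomorphic displacement field to the deformed mesh elements, amounts to sampling the field at each vertex and adding the sampled displacement. The sketch below assumes the field is stored densely on a voxel grid and uses nearest-neighbour sampling for brevity; trilinear interpolation would be the usual refinement, and the function name is an illustrative assumption:

```python
import numpy as np

def warp_mesh(vertices, displacement_field, spacing=1.0):
    """Warp deformed mesh vertices by a dense displacement field sampled
    on a voxel grid of shape (X, Y, Z, 3), yielding the warped mesh."""
    vertices = np.asarray(vertices, float)
    # nearest grid index for each vertex, clamped to the field's extent
    idx = np.clip(np.rint(vertices / spacing).astype(int), 0,
                  np.array(displacement_field.shape[:-1]) - 1)
    # add the displacement sampled at each vertex's nearest grid point
    return vertices + displacement_field[idx[:, 0], idx[:, 1], idx[:, 2]]
```

Vertices outside any non-zero region of the field are left in place, which matches the intuition that the diffeomorphic refinement only perturbs the mesh where residual misalignment remains.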
[12] According to other embodiments of the present invention, a system for constraining registration of a pre-operative image of an anatomical area of interest to an intraoperative image of the anatomical area of interest comprises a modeling processor, a registration processor, and a display. The modeling processor is configured to perform an iterative optimization process a plurality of times. In one embodiment, the iterative optimization process includes generating a biomechanical displacement field by applying a biomechanical model to the pre-operative image using a plurality of model parameters, applying the biomechanical displacement field to the pre-operative image to generate a biomechanically deformed pre-operative image, comparing the biomechanically deformed pre-operative image with the intra-operative image to yield a similarity measurement value, and using the similarity measurement value to update the plurality of model parameters for a subsequent iteration. The registration processor is configured to perform a diffeomorphic registration of the biomechanically deformed pre-operative image to the intra-operative image. The display is configured to present the biomechanically deformed pre-operative image overlaying the intra-operative image. In one embodiment, the aforementioned system also includes a rotational angiography system configured to generate the intra-operative image. In one embodiment, the system also includes an insufflation device configured to provide a pressure measurement value to the modeling processor that may be used as a model parameter.
[13] Additional features and advantages of the invention will be made apparent from the following detailed description of illustrative embodiments that proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[14] The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there is shown in the drawings embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures:
[15] FIG. 1 shows a computer-assisted surgical system, used in some embodiments of the present invention;
[16] FIG. 2 provides a high-level overview of a biomechanically driven registration framework for registering an intra-operative image to a pre-operative image, according to some embodiments of the present invention;
[17] FIG. 3 presents an overview of a biomechanically driven registration framework for registering an intra-operative image to a pre-operative image where the model parameters can be updated in an iterative process, according to some embodiments of the present invention;
[18] FIG. 4 shows two images illustrating the effect of the deformation on the abdominal wall, liver, and surrounding tissues;
[19] FIG. 5 illustrates a process in which a biomechanical model is used to constrain registration of a pre-operative image of an anatomical area of interest to an intra-operative image of the anatomical area of interest, according to some embodiments of the present invention;
[20] FIG. 6 provides pseudocode for implementing a non-rigid image registration with diffeomorphic and insufflation model constraints, as may be used in some embodiments of the present invention; and
[21] FIG. 7 illustrates an exemplary computing environment within which embodiments of the invention may be implemented.
DETAILED DESCRIPTION
[22] The following disclosure describes the present invention according to several embodiments directed at methods, systems, and apparatuses for registering pre-operative images to intra-operative images using a biomechanically driven registration framework. In embodiments described herein, this framework uses both intensity and gas insufflation (or liquid injection) information in the transformation model to estimate deformations to the anatomical area of interest. The various methods, systems, and apparatuses described herein are especially applicable to, but not limited to, minimally invasive surgical techniques.
[23] FIG. 1 shows a computer-assisted surgical system 100, used in some embodiments of the present invention. The system 100 includes components which may be categorized generally as being associated with a pre-operative site 105 or an intra-operative site 110. In the example of FIG. 1, the various components located at each site 105, 110 may be operably connected with a network 115. Thus, the components may be located at different areas of a facility, or even at different facilities. However, it should be noted that, in some embodiments the pre-operative site 105 and the intra-operative site 110 are co-located. In these embodiments, the network 115 may be absent and the components may be directly connected. Alternatively, a small scale network (e.g., a local area network) may be used.
[24] At the pre-operative site 105, an imaging system 105A is used to gather planning data. In one embodiment, the imaging system 105A gathers images using any of a variety of imaging modalities including, for example, tomographic modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), single-photon emission computed tomography (SPECT), and positron emission tomography (PET). The gathered pre-operative images are referred to generally herein as the pre-operative planning data. Once this pre-operative planning data is generated by the imaging system 105A, it is transferred (e.g., via network 115) to a database 110A at the intra-operative site 110.
[25] The intra-operative site 110 includes various components 110A, 110B, 110C, 110D, and 110E operably coupled to an imaging computer 110F. Although FIG. 1 only illustrates a single imaging computer 110F, in other embodiments, multiple imaging computers may be used. Collectively, the one or more imaging computers provide functionality for viewing, manipulating, communicating and storing medical images on computer readable media. Example implementations of computers that may be used as the imaging computer 110F are described below with reference to FIG. 7.
[26] In the example of FIG. 1, imaging device 110B is a rotational angiography system which includes a fixed C-Arm that rotates around the patient to acquire a series of rotational projection images of an anatomical area of interest that are subsequently reconstructed into two-dimensional images. These intra-operative images may subsequently be used to generate a three-dimensional model of the area of interest. Collectively, the reconstructed two-dimensional images and three-dimensional model are referred to herein as intra-operative planning data. In some embodiments, image reconstruction and/or three-dimensional model generation occurs within the imaging device 110B itself, using dedicated computer hardware (not shown in FIG. 1). Then, the intra-operative planning data is transferred to the imaging computer 110F for use during surgery. In other embodiments, captured rotational projection images are transferred to the imaging computer 110F for reconstruction into two-dimensional images and generation of the three-dimensional model. It should be noted that, although imaging device 110B is a rotational angiography system in the embodiment illustrated in FIG. 1, in other embodiments, different types of imaging devices may be used, including CT, MRI, and structured light devices.
[27] Once the intra-operative planning data is received by the imaging computer 110F, it may be stored in database 110A for later use. Then, when needed, the imaging computer 110F retrieves the intra-operative planning data from the database 110A for presentation on a display 110E to help guide the surgical team performing the operation. Alternatively, the imaging computer 110F may immediately present the received intra-operative planning data upon receipt from the imaging device 110B. Additional information may also be overlaid on the display 110E. For example, in one embodiment, the display is configured to present a biomechanically deformed pre-operative image overlaying the intra-operative image. Although a single display 110E is shown in the embodiment illustrated in FIG. 1, in other embodiments multiple displays may be used, for example, to display different perspectives of the anatomical area of interest (e.g., based on the pre-operative planning data and/or the intra-operative planning data), indications of sensitive tissue areas, or messages indicating that a new intra-operative scan should be performed to update the intra-operative planning data.
[28] During surgery, the surgical team utilizes the information presented on the display 110E, along with a laparoscope 110D and tracking system 110C. The laparoscope 110D is a medical instrument through which structures within the abdomen and pelvis can be seen during surgery. Typically, a small incision is made in a patient's abdominal wall, allowing the laparoscope to be inserted. There are various types of laparoscopes including, for example, telescopic rod lens systems (usually connected to a video camera) and digital systems in which a miniature digital video camera is placed at the end of the laparoscope. To mimic three-dimensional vision in humans, laparoscopes may be configured to capture stereo images using either a two-lens optical system or a single optical channel. Such laparoscopes are referred to herein as "stereo laparoscopes." The tracking system 110C provides tracking data to the imaging computer 110F for use in registration of the intra-operative planning data (received from device 110B) with data gathered by laparoscope 110D. In the example of FIG. 1, an optical tracking system 110C is depicted. However, other techniques may be used for tracking including, without limitation, electromagnetic (EM) tracking and/or robotic encoders.
[29] Depending on the surgical setting, additional devices may be included in the system 100 depicted in FIG. 1. For example, in some embodiments, the system further includes a gas insufflation device (not shown in FIG. 1) that may be used to expand the anatomical area of interest (e.g., abdomen) to provide additional workroom or reduce obstruction during surgery. This insufflation device may be configured to provide pressure measurement values to the imaging computer 110F for display or for use in other applications such as the modeling techniques described herein. In other embodiments, devices such as liquid injection systems may be used to create and measure pressure during surgery as an alternative to the aforementioned gas insufflation device.

[30] FIG. 2 provides a high-level overview of a biomechanically driven registration framework 200 for registering an intra-operative ("reference") image 205A to a pre-operative ("moving") image 205B, according to some embodiments of the present invention. The framework includes two main steps: registration constrained by a gas insufflation model 210 followed by a diffeomorphic non-rigid refinement 215 which, in combination, result in an aligned pre-operative image 220. The gas insufflation model constrained registration step 210 computes the deformations and organ shifts caused by gas pressure, using a biomechanical model which is based on the mechanical parameters and pressure level. This model is applied to the pre-operative image 205B to achieve an initial alignment with the intra-operative image 205A, which accounts for both non-rigid and rigid transformations caused by the insufflation. It should be noted that other fluid pressure values may be used in the constrained registration step 210 rather than those associated with gas insufflation. For example, in some embodiments, pressure values associated with liquid injection are used as an alternative to gas pressure values.
The biomechanical model is incorporated into the registration framework 200 by coupling the model parameters with an intensity similarity measure. In the diffeomorphic registration section 215 of the framework 200, the surface differences between the pre-operative image 205B (warped according to the biomechanical model) and the intra-operative image 205A are refined.
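The two-stage structure of framework 200 (model-constrained alignment followed by non-rigid refinement) can be sketched as follows. This is a minimal illustrative skeleton, not the actual implementation: the function names and the toy blending steps standing in for the biomechanical warp and the diffeomorphic refinement are assumptions made for clarity.

```python
# Hypothetical sketch of the two-step registration framework 200.
# Images are represented as flat intensity lists for illustration only.

def insufflation_constrained_registration(moving, reference, pressure):
    """Step 210: warp the pre-operative ("moving") image using a
    model-driven deformation. Here a simple blend stands in for the
    biomechanical model parameterized by the measured pressure."""
    return [m + 0.5 * (r - m) for m, r in zip(moving, reference)]

def diffeomorphic_refinement(warped, reference):
    """Step 215: non-rigidly refine the remaining differences between
    the model-warped image and the intra-operative reference."""
    return [w + 0.5 * (r - w) for w, r in zip(warped, reference)]

def register(moving, reference, pressure=12.0):
    """Run both stages and return the aligned pre-operative image 220."""
    warped = insufflation_constrained_registration(moving, reference, pressure)
    return diffeomorphic_refinement(warped, reference)
```

Each stage moves the pre-operative image closer to the intra-operative reference; the first stage accounts for the bulk insufflation-induced deformation, leaving only residual differences for the refinement stage.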
[31] In some embodiments, an approach similar to finite element method (FEM) registration is used. In this approach, the image voxels of the pre-operative image may be represented with tetrahedral mesh objects generated after image segmentation. In some embodiments, this segmentation may be manually performed by a clinician, while in other embodiments semi-automatic or automatic segmentation techniques may be employed. The results of the segmentation will be based on the anatomical area of interest. For example, for laparoscopic surgical applications, the pre-operative image may be segmented as liver, abdominal wall, and surrounding tissues. The deformation is then estimated using the mesh elements corresponding to these organs. Mechanical properties of the mesh elements (tissue models) are determined, for example, with Poisson's Ratio and Young's Modulus parameters measured experimentally. In some embodiments, the mechanical model-based image registration may be implemented according to the following image similarity gradient expression to compute the displacements:

$M\ddot{u} + C\dot{u} + Ku = f(u, I_m, I_r)$     (1)

where $M$, $C$, and $K$ are the matrices representing the mechanical properties of the tissues, including the object mass, damping factor, and object stiffness, respectively. The input images are $I_m$ (the pre-operative or "moving" image) and $I_r$ (the intra-operative or "reference" image). The voxel positions are denoted by $u$ and the image similarity gradient forces are represented by $f$. The partial differential equation (1) is solved for $u$ iteratively to obtain the similarity-maximizing voxel displacement fields. This mechanical model may be modified to adapt it for laparoscopic surgery. For example, the gas insufflation effect may be included in the optimization using the same mechanical constraints (left-hand side of the equation). The general form of the modified mechanical equilibrium equations may be formulated as
$M\ddot{u} + C\dot{u} + Ku = \alpha f(u, I_m, I_r) + f_f(u)$     (2)

where $f_f(u)$ is the fluid (e.g., gas or liquid) pressure force field. This external force is uniformly distributed on the tissue surfaces and may be applied at each optimization iteration. The coupling between the standard FEM and gas/liquid forces is achieved using the parameter $\alpha$. Initially, the value of $\alpha$ is set to be less than one and, as the algorithm converges, its value is increased to fine-tune the deformations using the intensity information.
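A heavily simplified, one-dimensional discretization of equation (2) can illustrate how the displacements are obtained iteratively. Everything below is an assumption for illustration: real implementations assemble FEM mass, damping, and stiffness matrices over the tetrahedral mesh and use an implicit solver, whereas this sketch uses scalar per-node coefficients and a simple semi-implicit time-stepping loop.

```python
import numpy as np

# Toy integration of  M u'' + C u' + K u = alpha * f_sim(u) + f_fluid(u).
# f_sim stands in for the image similarity gradient force f(u, Im, Ir);
# f_fluid stands in for the uniformly distributed gas/liquid pressure force.

def solve_displacements(f_sim, f_fluid, n=4, steps=2000, dt=1e-2,
                        m=1.0, c=2.0, k=1.0, alpha=0.5):
    u = np.zeros(n)          # nodal displacements
    v = np.zeros(n)          # nodal velocities
    for _ in range(steps):
        # residual of the equilibrium equation, moved to the right-hand side
        residual = alpha * f_sim(u) + f_fluid(u) - c * v - k * u
        a = residual / m     # acceleration
        v = v + dt * a       # semi-implicit (symplectic) Euler update
        u = u + dt * v
    return u
```

At steady state the velocities and accelerations vanish, so the returned displacements satisfy the static balance $Ku = \alpha f + f_f$, i.e., the pressure force and the similarity force jointly determine the deformation, as in the text.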
[32] In addition to the mechanical modeling with a stiffness matrix, the tissue model can be brought to a higher degree of realism by introducing dynamic and functional parameters rather than using static mechanical properties. For example, the blood vessels in the liver usually experience deformation due to resection during surgery. In turn, this leads to bleeding and tissue stiffness variations due to rapid delivery of blood, which can be modeled with hemodynamics or other dynamic parameters. Additionally, dynamic tissue stiffness and response monitoring can provide useful information to the surgeon during the procedure. For example, the surgeon can receive feedback on sensitive tissue locations (e.g., presented on a display) and perform the surgery with laparoscopic instruments accordingly. This information can also indicate the time to perform a new intra-operative scan to observe the anatomical changes and critical tissue locations.
[33] FIG. 3 provides an overview of a biomechanically driven registration framework 300 for registering an intra-operative image to a pre-operative image, where the model parameters can be updated in an iterative process, according to some embodiments of the present invention. A Pre-Operative Image Segmentation module 305 segments the image voxels of the pre-operative image into one or more elements. For example, in one embodiment, these elements include an abdominal wall element, a liver element, and an element representing surrounding tissues. Then, a Tetrahedral Mesh Generation module 310 generates tetrahedral mesh objects for each element. These tetrahedral mesh objects are then used as input into a simulation comprising a Pneumoperitoneum Generation module 315, a Finite Element Solver module 320, and a Mesh-to-Image Reconstruction module 325. This simulation may be developed using any technique known in the art. For example, in one embodiment, the Simulation Open Framework Architecture (SOFA) framework is used to implement the framework.
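The front end of this pipeline (segmentation followed by mesh generation) can be sketched as follows. The label values, the dictionary-based "mesh" objects, and the stiffness numbers are all illustrative assumptions; real tetrahedral meshing is performed by dedicated tooling (e.g., inside a framework such as SOFA).

```python
# Hypothetical stand-in for modules 305 and 310 of framework 300.
LABELS = {1: "abdominal_wall", 2: "liver", 3: "surrounding_tissue"}

def segment_to_elements(label_image):
    """Module 305 stand-in: group voxel coordinates by segmentation label.
    label_image is a nested list indexed as [z][y][x]."""
    elements = {name: [] for name in LABELS.values()}
    for z, plane in enumerate(label_image):
        for y, row in enumerate(plane):
            for x, label in enumerate(row):
                if label in LABELS:
                    elements[LABELS[label]].append((x, y, z))
    return elements

def generate_mesh(elements):
    """Module 310 stand-in: one mesh object per organ, recording its voxels
    and an illustrative default stiffness (Young's modulus, kPa)."""
    default_stiffness = {"abdominal_wall": 20.0, "liver": 5.0,
                         "surrounding_tissue": 2.0}  # placeholder values
    return {name: {"nodes": voxels,
                   "youngs_modulus_kPa": default_stiffness[name]}
            for name, voxels in elements.items()}
```

The per-organ mesh objects produced this way would then be handed to the simulation stage, which assigns forces and solves for the deformation.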
[34] The simulation presented in FIG. 3 may be configured in a variety of ways to implement the techniques described herein. For example, in one embodiment, a co-rotational finite element method is used for modeling and displacement fields are computed with an implicit Euler solver. A mechanical system comprising the abdominal wall, liver, and surrounding tissues is stimulated with the external force field generated in the abdominal cavity, which stands for the insufflated gas in this example. The applied force field, together with the internal spring and bending forces, deforms the mesh elements while preserving the mesh topology. FIG. 4 provides two images illustrating the effect of the deformation on the abdominal wall, liver, and surrounding tissues. Image 400 shows the initial mesh before insufflation, while image 405 shows the same region after insufflation. Returning to FIG. 3, the force, acceleration, and displacement fields for each node element are integrated and computed in an iterative approach. The model parameters may include, for example, the gas (or liquid) pressure, mesh stiffness, and Young's modulus for the mesh elements. The initial parameter values may be set, for example, using experimentally obtained values, and the initial pressure level may be collected during surgery.
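An implicit (backward) Euler step of the kind used by such a solver can be illustrated on a single damped spring node. The scalar coefficients below are illustrative assumptions; a real FEM solver performs the analogous update with assembled matrices, but the key idea is the same: solving for the end-of-step velocity keeps the integration stable even for stiff meshes.

```python
def implicit_euler_step(u, v, f_ext, dt=0.1, m=1.0, c=1.0, k=10.0):
    """One backward-Euler step of  m*v' = f_ext - k*u - c*v,  u' = v.
    Substituting u_new = u + dt*v_new into the velocity update and
    solving for v_new gives the closed-form implicit update below."""
    v_new = (v + (dt / m) * (f_ext - k * u)) / (1.0 + dt * c / m + dt * dt * k / m)
    u_new = u + dt * v_new
    return u_new, v_new
```

Iterating this step under a constant external force (standing in for the insufflation pressure) drives the node to the static equilibrium u = f_ext / k without the oscillation or blow-up an explicit scheme can exhibit at this step size.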
[35] Continuing with reference to FIG. 3, the simulation procedure is combined with the intensity similarity computation and parameter update. Specifically, for each iteration, deformed mesh elements are transformed back to the image volume. This may be achieved, for example, by using the node displacements together with Thin-Plate Spline interpolation and back-mapping algorithms. In the example of FIG. 3, a normalized mutual information (NMI) Cost Function module 335 compares an intra-operative image 330 to the pre-operative image warped by the simulation to determine a cost (i.e., a measure of the similarity of the two images) and update one or more parameters for the simulation. Once the simulation receives the updated parameters, it computes a new warped pre-operative image which is used as input to the NMI Cost Function module 335. This process continues until the cost reaches a predetermined threshold value, at which point a Diffeomorphic Non-Rigid Registration module 340 registers the final warped pre-operative image with the intra-operative image.
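A minimal normalized mutual information measure, NMI(A, B) = (H(A) + H(B)) / H(A, B), can be computed from discrete intensity histograms as below. This is a simplification of what an NMI cost module would do: real pipelines bin and smooth the joint histogram of two image volumes, whereas this sketch works on two equally sized intensity lists.

```python
import math
from collections import Counter

def entropy(counts, n):
    """Shannon entropy (nats) of a histogram given as a Counter."""
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def nmi(a, b):
    """Normalized mutual information of two equal-length intensity lists.
    NMI ranges from 1 (independent) to 2 (identical up to relabeling)."""
    n = len(a)
    h_a = entropy(Counter(a), n)
    h_b = entropy(Counter(b), n)
    h_ab = entropy(Counter(zip(a, b)), n)
    # Degenerate case: two constant images have zero joint entropy.
    return (h_a + h_b) / h_ab if h_ab > 0 else 2.0
```

NMI is attractive for multi-modal registration (e.g., CT-to-C-arm) because it rewards a consistent statistical relationship between intensities rather than identical intensity values.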
[36] FIG. 5 illustrates a process 500 in which a biomechanical model is used to constrain registration of a pre-operative image of an anatomical area of interest to an intra-operative image of the anatomical area of interest, according to some embodiments of the present invention. As with the example of FIG. 3, process 500 shown in FIG. 5 also updates the model parameters via an iterative process. The method begins at 505 as a pre-operative image is received, for example, from a previously generated MRI or CT scan. At 510, the pre-operative image is segmented and mesh elements are identified. Then, at 515, an intra-operative image is received, for example, from a rotational angiography scan. Once the intra-operative image has been received, a rigid registration process (not shown in FIG. 5) may be applied to align the pre-operative image to the intra-operative image. At 520, model parameters are initialized. In some embodiments, the model parameters include biomechanical as well as dynamic parameters (e.g., hemodynamic values). Some of the initial parameter values may be set, for example, using experimentally obtained values, while other parameter values (e.g., an initial gas or liquid pressure level) may be collected during surgery. For example, in some embodiments, a pressure measurement is received during surgery from an insufflation or injection device. This pressure measurement may then be used either directly as a parameter value or it may be used to derive a model parameter.
[37] Continuing with reference to FIG. 5, the initial parameters are used as input into an iterative process comprising steps 525, 530, 535, and 540. This iterative process is executed a plurality of times to apply a biomechanical displacement field to the pre-operative image. In some embodiments, such as the one illustrated in FIG. 5, the process is repeated until a gradient of intensity similarity between iterations is smaller than a pre-determined threshold value. At 525, a biomechanical displacement field is generated by applying the biomechanical model to the pre-operative image using the model parameters. The biomechanical displacement field represents a displacement of a portion of an anatomical area of interest. For example, in some embodiments, the portion of the anatomical area of interest corresponds to an abdominal wall region and the biomechanical displacement field corresponds to displacement of a portion of the anatomical area of interest due to gas insufflation or liquid injection. In some embodiments, this displacement field is generated based on the mesh elements identified in the segmented pre-operative image. For example, a plurality of surface points on each of the mesh elements may be identified. Then, the biomechanical model can apply an insufflated gas pressure value to the surface points to generate the biomechanical displacement field. In embodiments where liquid injection is used, a liquid injection pressure value may be applied as an alternative to the gas pressure value.
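The pressure-to-displacement step at 525 can be sketched very simply: a uniform pressure acting along each surface point's outward normal yields a per-point displacement. The per-node compliance scalar used here is an illustrative assumption; a real model distributes pressure over element faces and solves the FEM system for the resulting displacements.

```python
# Hedged sketch of generating a displacement field from a pressure value.
def pressure_displacements(surface_points, normals, pressure, compliance=0.01):
    """Return one displacement vector per surface point: the applied
    pressure, scaled by a toy compliance, directed along the normal."""
    field = []
    for (x, y, z), (nx, ny, nz) in zip(surface_points, normals):
        s = pressure * compliance          # displacement magnitude per node
        field.append((s * nx, s * ny, s * nz))
    return field
```

The same function applies unchanged whether the pressure value comes from gas insufflation or liquid injection, mirroring the interchangeability described in the text.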
[38] Next, at 530, the biomechanical displacement field is applied to the pre-operative image to generate a biomechanically deformed pre-operative image. At 535, the biomechanically deformed pre-operative image is compared with the intra-operative image to yield a similarity measurement value. Any technique known in the art may be used to perform the image comparison and generate the similarity measurement value. Then, at 540, the similarity measurement value is used to update the plurality of model parameters for a subsequent iteration.
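The parameter update at 540 can be illustrated as similarity ascent: each model parameter is nudged in the direction that increases the similarity measurement, with the gradient estimated by finite differences. The step size, perturbation, and dictionary-based parameter set are all illustrative assumptions rather than the method prescribed by the text.

```python
# Hypothetical sketch of step 540: similarity-driven parameter update.
def update_parameters(params, similarity_fn, step=0.1, eps=1e-4):
    """params: dict of model parameters (e.g., pressure, stiffness).
    similarity_fn: maps a parameter dict to a scalar similarity value."""
    updated = dict(params)
    base = similarity_fn(params)
    for name, value in params.items():
        probe = dict(params)
        probe[name] = value + eps
        grad = (similarity_fn(probe) - base) / eps   # dS / d(parameter)
        updated[name] = value + step * grad          # ascend similarity
    return updated
```

For example, if similarity peaks at a pressure of 10 and the current estimate is 8, one update moves the pressure parameter toward 10, and repeated iterations of steps 525-540 converge on the similarity-maximizing parameter set.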
[39] Once the process has been repeated the desired number of times (e.g., until the gradient of intensity similarity between iterations is smaller than the threshold), at 545 a diffeomorphic registration of the biomechanically deformed pre-operative image to the intra-operative image is performed. Then, at 550, the biomechanically deformed pre-operative image is presented on a display overlaying the intra-operative image. Following the process illustrated in FIG. 5, additional information resulting from the process 500 may be used to enhance surgical applications. For example, in one embodiment, a sensitive tissue location is determined in the anatomical area of interest based on the one or more dynamic parameter values used in the process 500. This tissue location may be presented on a display with an indication (e.g., color) that highlights its sensitivity. Alternatively, the surgeon may be presented with a visual and/or audible warning when approaching the sensitive location with a surgical device. In some embodiments, the dynamic parameters are used to determine a time for performing a new intra-operative scan. As this time approaches, a visual and/or audible indicator may be presented to alert the surgical team that a new intra-operative scan should be performed. Alternatively, the time may be used to automatically perform the scan. It should be noted that this time may be derived far in advance of when the scan is needed. Thus, any automatic or manual preparation of the device providing the intra-operative scan may be done while surgery is being performed, allowing for minimal time to be lost transitioning between surgery and intra-operative scanning.
[40] For further illustration of the techniques described herein, FIG. 6 provides pseudocode 600 for implementing a non-rigid image registration with diffeomorphic and insufflation model constraints, as may be used in some embodiments of the present invention. Although the pseudocode in FIG. 6 is based on an insufflation model, it should be understood that a fluid injection model or other models may be substituted in the pseudocode based on the clinical application. In the pseudocode, the functions DF(.) and BM(.) perform the diffeomorphic and biomechanical gas insufflation model regularization on the displacement fields, respectively. The mechanical parameter set is denoted by β, and each element in the set is updated at the end of iteration t. The gradient of the intensity similarity measure S is used in determining the parameter update direction, and α determines its step size. The optimization procedure stops when the change in S is smaller than some constant δ. The optimization loop returns the warped pre-operative image and the displacement fields Udf and
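The stopping rule of pseudocode 600 can be rendered as a generic loop: iterate a regularized update until the change in the similarity measure S falls below δ. The `warp_step` and `similarity` callables below are stand-ins for the BM(.)/DF(.)-regularized update and for S; treating them as opaque functions is an assumption made so the control flow is clear.

```python
# Hedged rendering of the FIG. 6 optimization loop and its delta criterion.
def optimize(warp_step, similarity, state, delta=1e-6, max_iter=1000):
    """Iterate warp_step until the change in the similarity measure S
    is smaller than delta, or max_iter is reached."""
    s_prev = similarity(state)
    s = s_prev
    for _ in range(max_iter):
        state = warp_step(state)           # one regularized update
        s = similarity(state)
        if abs(s - s_prev) < delta:        # |change in S| < delta: stop
            break
        s_prev = s
    return state, s
```

Because the criterion tests the change in S rather than S itself, the loop terminates both when alignment is essentially perfect and when further updates stop improving it.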
[41] FIG. 7 illustrates an exemplary computing environment 700 within which embodiments of the invention may be implemented. This environment 700 may be used, for example, to implement a portion of one or more components used at the pre-operative site 107 or the intra-operative site 110 illustrated in FIG. 1. Computing environment 700 may include computer system 710, which is one example of a computing system upon which embodiments of the invention may be implemented. Computers and computing environments, such as computer system 710 and computing environment 700, are known to those of skill in the art and thus are described briefly here.
[42] As shown in FIG. 7, the computer system 710 may include a communication mechanism such as a bus 721 or other communication mechanism for communicating information within the computer system 710. The system 710 further includes one or more processors 720 coupled with the bus 721 for processing the information.
[43] The processors 720 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as used herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks, and may comprise any one or combination of hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. A processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer. A processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between. A user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof. A user interface comprises one or more display images enabling user interaction with a processor or other device.
[44] The number and configuration of each processor may vary in different embodiments of the present invention. For example, in one embodiment, a system for using a biomechanical model to constrain image registration includes a segmentation processor configured to perform segmentation tasks; a modeling processor configured to perform modeling tasks (e.g., the iterative optimization process described herein); and a registration processor configured to perform diffeomorphic registration and/or other registration tasks.
[45] Continuing with reference to FIG. 7, the computer system 710 also includes a system memory 730 coupled to the bus 721 for storing information and instructions to be executed by processors 720. The system memory 730 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 731 and/or random access memory (RAM) 732. The system memory RAM 732 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM). The system memory ROM 731 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM). In addition, the system memory 730 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 720. A basic input/output system 733 (BIOS) containing the basic routines that help to transfer information between elements within computer system 710, such as during start-up, may be stored in ROM 731. RAM 732 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 720. System memory 730 may additionally include, for example, operating system 734, application programs 735, other program modules 736 and program data 737.
[46] The computer system 710 also includes a disk controller 740 coupled to the bus
721 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 741 and a removable media drive 742 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid state drive). The storage devices may be added to the computer system 710 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).
[47] The computer system 710 may also include a display controller 765 coupled to the bus 721 to control a display or monitor 766, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user. The computer system includes an input interface 760 and one or more input devices, such as a keyboard 762 and a pointing device 761, for interacting with a computer user and providing information to the processor 720. The pointing device 761, for example, may be a mouse, a light pen, a trackball, or a pointing stick for communicating direction information and command selections to the processor 720 and for controlling cursor movement on the display 766. The display 766 may provide a touch screen interface which allows input to supplement or replace the communication of direction information and command selections by the pointing device 761.
[48] The computer system 710 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 720 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 730. Such instructions may be read into the system memory 730 from another computer readable medium, such as a hard disk 741 or a removable media drive 742. The hard disk 741 may contain one or more datastores and data files used by embodiments of the present invention. Datastore contents and data files may be encrypted to improve security. The processors 720 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 730. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
[49] As stated above, the computer system 710 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. The term "computer readable medium" as used herein refers to any medium that participates in providing instructions to the processor 720 for execution. A computer readable medium may take many forms including, but not limited to, non-transitory, nonvolatile media, volatile media, and transmission media. Non-limiting examples of nonvolatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as hard disk 741 or removable media drive 742. Non-limiting examples of volatile media include dynamic memory, such as system memory 730. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the bus 721. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
[50] The computing environment 700 may further include the computer system 710 operating in a networked environment using logical connections to one or more remote computers, such as remote computer 780. Remote computer 780 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 710. When used in a networking environment, computer system 710 may include modem 772 for establishing communications over a network 771, such as the Internet. Modem 772 may be connected to system bus 721 via user network interface 770, or via another appropriate mechanism.
[51] Network 771 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 710 and other computers (e.g., remote computing system 780). The network 771 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-11, or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite, or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 771. In some embodiments, computers in computing environment 700 may include a hardware or software receiver module (not shown in FIG. 7) configured to receive one or more data items used in performing the techniques described herein.
[52] An executable application, as used herein, comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input. An executable procedure is a segment of code or machine readable instruction, sub-routine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.
[53] A graphical user interface (GUI), as used herein, comprises one or more display images, generated by a display processor and enabling user interaction with a processor or other device and associated data acquisition and processing functions. The GUI also includes an executable procedure or executable application. The executable procedure or executable application conditions the display processor to generate signals representing the GUI display images. These signals are supplied to a display device which displays the image for viewing by the user. The processor, under control of an executable procedure or executable application, manipulates the UI display images in response to signals received from the input devices. In this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device.
[54] The functions and process steps herein may be performed automatically or wholly or partially in response to user command. An activity (including a step) performed automatically is performed in response to one or more executable instructions or device operation without user direct initiation of the activity.

[55] The embodiments of the present invention can be included in an article of manufacture comprising, for example, a non-transitory computer readable medium. This computer readable medium may have embodied therein a method for facilitating one or more of the techniques utilized by some embodiments of the present invention. The article of manufacture may be included as part of a computer system or sold separately.
[56] The system and processes of the figures are not exclusive. Other systems, processes and menus may be derived in accordance with the principles of the invention to accomplish the same objectives. Although this invention has been described with reference to particular embodiments, it is to be understood that the embodiments and variations shown and described herein are for illustration purposes only. Modifications to the current design may be implemented by those skilled in the art, without departing from the scope of the invention. As described herein, the various systems, subsystems, agents, managers and processes can be implemented using hardware components, software components, and/or combinations thereof. No claim element herein is to be construed under the provisions of 35 U.S.C. 112, sixth paragraph, unless the element is expressly recited using the phrase "means for."

Claims

We claim:
1. A computer-implemented method of using a biomechanical model to constrain registration of a pre-operative image of an anatomical area of interest to an intra-operative image of the anatomical area of interest, the method comprising:
initializing a plurality of model parameters comprising biomechanical parameter values;
performing an iterative optimization process a plurality of times, the iterative optimization process comprising:
generating a biomechanical displacement field by applying the biomechanical model to the pre-operative image using the plurality of model parameters,
applying the biomechanical displacement field to the pre-operative image to generate a biomechanically deformed pre-operative image,
comparing the biomechanically deformed pre-operative image with the intra-operative image to yield a similarity measurement value, and
using the similarity measurement value to update the plurality of model parameters for a subsequent iteration;
performing a diffeomorphic registration of the biomechanically deformed pre-operative image to the intra-operative image; and
presenting the biomechanically deformed pre-operative image overlaying the intra-operative image on a display.
2. The method of claim 1, wherein the biomechanical displacement field corresponds to displacement of a portion of the anatomical area of interest due to gas insufflation and the portion of the anatomical area of interest corresponds to an abdominal wall region.
3. The method of claim 1, wherein the biomechanical displacement field corresponds to displacement of a portion of the anatomical area of interest due to liquid injection and the portion of the anatomical area of interest corresponds to an abdominal wall region.
4. The method of claim 1, further comprising:
performing a rigid registration of the pre-operative image to the intra-operative image prior to performing the iterative optimization process.
5. The method of claim 1, further comprising:
identifying a plurality of mesh elements based on the pre-operative image; and identifying a plurality of surface points on each of the plurality of mesh elements, wherein the biomechanical model applies an insufflated gas pressure value to the plurality of surface points to generate the biomechanical displacement field.
6. The method of claim 1, further comprising:
identifying a plurality of mesh elements based on the pre-operative image; and
identifying a plurality of surface points on each of the plurality of mesh elements,
wherein the biomechanical model applies a liquid injection value to the plurality of surface points to generate the biomechanical displacement field.
7. The method of claim 1, wherein the plurality of model parameters further comprise one or more dynamic parameter values.
8. The method of claim 7, further comprising:
receiving a pressure measurement value from an insufflation device,
wherein the one or more dynamic parameter values further comprise the pressure measurement value.
9. The method of claim 7, wherein the one or more dynamic parameter values comprise one or more hemodynamic parameter values.
10. The method of claim 7, further comprising:
determining a sensitive tissue location in the anatomical area of interest based on the one or more dynamic parameter values; and
presenting the sensitive tissue location on the display.
11. The method of claim 7, further comprising:
determining a time to perform a new intra-operative scan based on the one or more dynamic parameter values; and
performing the new intra-operative scan at the time to acquire an updated intra-operative image.
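The iterative optimization recited in claim 1 (simulate a deformation, warp the pre-operative image, compare against the intra-operative image, update the model parameters) can be sketched as follows. This is a minimal toy illustration, not the patented method: `toy_displacement`, `deform`, and the grid search over candidate pressures are hypothetical stand-ins for the biomechanical model, the image warp, and the parameter update, respectively.

```python
import numpy as np

def toy_displacement(shape, pressure):
    """Hypothetical biomechanical model: a uniform axial displacement
    whose magnitude grows with the insufflation pressure parameter."""
    return pressure * np.ones(shape)

def deform(image, field):
    """Warp the pre-operative image by the (integer-rounded) mean
    displacement of the field along the first axis."""
    return np.roll(image, int(round(float(field.mean()))), axis=0)

def similarity(a, b):
    """Negative sum of squared intensity differences (higher is better)."""
    return -float(np.sum((a - b) ** 2))

def register(pre_op, intra_op, candidate_pressures):
    """Iterate: simulate -> deform -> compare -> update, here realized
    as a simple search over candidate pressure values."""
    best_score, best_p, best_img = -np.inf, None, None
    for p in candidate_pressures:
        warped = deform(pre_op, toy_displacement(pre_op.shape, p))
        score = similarity(warped, intra_op)
        if score > best_score:
            best_score, best_p, best_img = score, p, warped
    return best_p, best_img
```

For example, with a pre-operative image containing one bright row and an intra-operative image equal to the same image shifted by three rows, the search recovers a pressure of 3 and a warp that matches the intra-operative image exactly.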
12. A computer-implemented method of using a biomechanical model to constrain registration of a pre-operative image of an anatomical area of interest to an intra-operative image of the anatomical area of interest, the method comprising:
initializing a plurality of model parameters comprising mechanical parameter values;
performing a rigid registration of the pre-operative image to the intra-operative image;
generating a biomechanically deformed pre-operative image by applying the biomechanical model to the pre-operative image,
computing a statistical gradient of intensity similarity value based on the biomechanically deformed pre-operative image and the intra-operative image, and
updating the plurality of model parameters based on the statistical gradient of intensity similarity value;
generating a registered pre-operative image by applying a diffeomorphic model to the biomechanically deformed pre-operative image; and
presenting the registered pre-operative image overlaying the intra-operative image on a display.
13. The method of claim 12, wherein the plurality of model parameters comprise an insufflated gas pressure value or a liquid pressure value.
14. The method of claim 12, wherein the plurality of model parameters further comprise one or more dynamic parameter values.
15. The method of claim 14, wherein the one or more dynamic parameter values comprises a hemodynamic parameter value.
16. The method of claim 12, further comprising:
segmenting the pre-operative image into a plurality of anatomical regions; and
generating a plurality of mesh elements, each mesh element associated with one of the plurality of anatomical regions.
17. The method of claim 16, wherein generating the biomechanically deformed pre-operative image by applying the biomechanical model to the pre-operative image comprises:
using the biomechanical model to generate a biomechanical gas insufflation displacement field based on the plurality of mesh elements and the plurality of model parameters; and
applying the biomechanical gas insufflation displacement field to the plurality of mesh elements to yield a plurality of deformed mesh elements.
18. The method of claim 16, wherein generating the biomechanically deformed pre-operative image by applying the biomechanical model to the pre-operative image comprises:
using the biomechanical model to generate a biomechanical liquid injection displacement field based on the plurality of mesh elements and the plurality of model parameters; and
applying the biomechanical liquid injection displacement field to the plurality of mesh elements to yield a plurality of deformed mesh elements.
19. The method of claim 18, wherein generating the registered pre-operative image by applying the diffeomorphic model to the biomechanically deformed pre-operative image comprises:
using the diffeomorphic model to generate a diffeomorphic displacement field based on the plurality of deformed mesh elements and the intra-operative image; and
applying the diffeomorphic displacement field to the plurality of deformed mesh elements to yield a plurality of warped mesh elements,
wherein the registered pre-operative image comprises the plurality of warped mesh elements.
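Claims 17 through 19 describe a two-stage warp on the mesh: a biomechanical (gas insufflation or liquid injection) displacement field applied to the mesh elements, followed by a diffeomorphic displacement field applied to the already-deformed mesh. A minimal sketch, with both fields represented as hypothetical per-vertex translation arrays rather than fields produced by actual biomechanical or diffeomorphic models:

```python
import numpy as np

def apply_field(vertices, field):
    """Displace each mesh vertex by its corresponding field vector."""
    return vertices + field

def two_stage_warp(vertices, biomech_field, diffeo_field):
    """First the biomechanical deformation (claims 17-18), then the
    diffeomorphic refinement on the deformed mesh (claim 19)."""
    deformed = apply_field(vertices, biomech_field)
    return apply_field(deformed, diffeo_field)
```

Composing the fields in this order matters: the diffeomorphic field is estimated against, and applied to, the biomechanically deformed mesh, not the original pre-operative mesh.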
20. A system for constraining registration of a pre-operative image of an anatomical area of interest to an intra-operative image of the anatomical area of interest, the system comprising:
a modeling processor configured to perform an iterative optimization process a plurality of times, the iterative optimization process comprising:
generating a biomechanical displacement field by applying a biomechanical model to the pre-operative image using a plurality of model parameters,
applying the biomechanical displacement field to the pre-operative image to generate a biomechanically deformed pre-operative image,
comparing the biomechanically deformed pre-operative image with the intra-operative image to yield a similarity measurement value, and
using the similarity measurement value to update the plurality of model parameters for a subsequent iteration;
a registration processor configured to perform a diffeomorphic registration of the biomechanically deformed pre-operative image to the intra-operative image; and
a display configured to present the biomechanically deformed pre-operative image overlaying the intra-operative image.
21. The system of claim 20, further comprising a rotational angiography system configured to generate the intra-operative image.
22. The system of claim 20, further comprising:
an insufflation device configured to provide a pressure measurement value to the modeling processor,
wherein the plurality of model parameters further comprise the pressure measurement value.
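The statistical-gradient update of claim 12 can be illustrated with a central finite-difference gradient step on a scalar pressure parameter. The quadratic loss below is a hypothetical stand-in for the (negated) intensity similarity between the deformed pre-operative image and the intra-operative image; a real implementation would differentiate the similarity measure through the biomechanical simulation.

```python
def gradient_step(param, loss, lr=0.1, eps=1e-3):
    """One descent step using a central finite-difference estimate of
    the gradient of the dissimilarity w.r.t. the model parameter."""
    g = (loss(param + eps) - loss(param - eps)) / (2.0 * eps)
    return param - lr * g

def optimize(param, loss, iters=100):
    """Repeat the update, as in the iterative process of claim 12
    (fixed iteration count in place of a convergence test)."""
    for _ in range(iters):
        param = gradient_step(param, loss)
    return param
```

With a toy loss minimized at pressure 3.0, the parameter converges to that value after a few dozen iterations.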
PCT/US2014/016686 2013-02-15 2014-02-17 Biomechanically driven registration of pre-operative image to intra-operative 3d images for laparoscopic surgery WO2014127321A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201361765489P 2013-02-15 2013-02-15
US61/765,489 2013-02-15
US201361865172P 2013-08-13 2013-08-13
US61/865,172 2013-08-13

Publications (2)

Publication Number Publication Date
WO2014127321A2 true WO2014127321A2 (en) 2014-08-21
WO2014127321A3 WO2014127321A3 (en) 2014-10-16

Family

ID=50277299

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/016686 WO2014127321A2 (en) 2013-02-15 2014-02-17 Biomechanically driven registration of pre-operative image to intra-operative 3d images for laparoscopic surgery

Country Status (1)

Country Link
WO (1) WO2014127321A2 (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016028308A1 (en) * 2014-08-22 2016-02-25 Analogic Corporation Updating reference imaging data with update 2d and/or 3d imaging data
US10650537B2 (en) 2014-08-22 2020-05-12 B-K Medical Aps Updating reference imaging data with update 2D and/or 3D imaging data
WO2016082017A1 (en) * 2014-11-27 2016-06-02 Synaptive Medical (Barbados) Inc. Method, system and apparatus for quantitative surgical image registration
GB2549023A (en) * 2014-11-27 2017-10-04 Synaptive Medical Barbados Inc Method, system and apparatus for quantitative surgical image registration
US9799114B2 (en) 2014-11-27 2017-10-24 Synaptive Medical (Barbados) Inc. Method, system and apparatus for quantitative surgical image registration
GB2549023B (en) * 2014-11-27 2020-06-17 Synaptive Medical Barbados Inc Method, system and apparatus for quantitative surgical image registration
WO2016182550A1 (en) 2015-05-11 2016-11-17 Siemens Aktiengesellschaft Method and system for registration of 2d/2.5d laparoscopic and endoscopic image data to 3d volumetric image data
WO2017180097A1 (en) * 2016-04-12 2017-10-19 Siemens Aktiengesellschaft Deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation
US10885647B2 (en) 2016-05-02 2021-01-05 Katholieke Universiteit Leuven Estimation of electromechanical quantities by means of digital images and model-based filtering techniques

Also Published As

Publication number Publication date
WO2014127321A3 (en) 2014-10-16

Similar Documents

Publication Publication Date Title
US9129422B2 (en) Combined surface reconstruction and registration for laparoscopic surgery
KR102014355B1 (en) Method and apparatus for calculating location information of surgical device
US10776935B2 (en) System and method for correcting data for deformations during image-guided procedures
Plantefeve et al. Patient-specific biomechanical modeling for guidance during minimally-invasive hepatic surgery
US9761014B2 (en) System and method for registering pre-operative and intra-operative images using biomechanical model simulations
Haouchine et al. Image-guided simulation of heterogeneous tissue deformation for augmented reality during hepatic surgery
EP2637593B1 (en) Visualization of anatomical data by augmented reality
Haouchine et al. Impact of soft tissue heterogeneity on augmented reality for liver surgery
US20180189966A1 (en) System and method for guidance of laparoscopic surgical procedures through anatomical model augmentation
US10026016B2 (en) Tracking and representation of multi-dimensional organs
CN104000655B (en) Surface reconstruction and registration for the combination of laparoscopically surgical operation
US20180150929A1 (en) Method and system for registration of 2d/2.5d laparoscopic and endoscopic image data to 3d volumetric image data
CN110458872B (en) System and method for performing biomechanically driven image registration using ultrasound elastography
Oktay et al. Biomechanically driven registration of pre-to intra-operative 3D images for laparoscopic surgery
WO2014127321A2 (en) Biomechanically driven registration of pre-operative image to intra-operative 3d images for laparoscopic surgery
AU2015203332B2 (en) Real-time generation of mri slices
Clements et al. Evaluation of model-based deformation correction in image-guided liver surgery via tracked intraoperative ultrasound
WO2017180097A1 (en) Deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation
JP7444569B2 (en) Arthroscopic surgery support device, arthroscopic surgery support method, and program
Zampokas et al. Real‐time stereo reconstruction of intraoperative scene and registration to preoperative 3D models for augmenting surgeons' view during RAMIS
US11393098B2 (en) Constrained object correction for a segmented image
Wang et al. Video-based Soft Tissue Deformation Tracking for Laparoscopic Augmented Reality-based Navigation in Kidney Surgery
Boussot et al. Statistical model for the prediction of lung deformation during video-assisted thoracoscopic surgery
Cotin et al. Augmented Reality for Computer-Guided Interventions
Lasowski et al. Adaptive visualization for needle guidance in RF liver ablation: taking organ deformation into account

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14710084

Country of ref document: EP

Kind code of ref document: A2

122 Ep: pct application non-entry in european phase

Ref document number: 14710084

Country of ref document: EP

Kind code of ref document: A2