WO2018002347A1 - Registering tomographic imaging and endoscopic imaging - Google Patents


Info

Publication number
WO2018002347A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging data
endoscopic
tomographic
sub
tomographic imaging
Prior art date
Application number
PCT/EP2017/066360
Other languages
French (fr)
Inventor
Bernardus Hendrikus Wilhelmus Hendriks
Thirukumaran Thangaraj KANAGASABAPATHI
Drazenko Babic
Michael Grass
Original Assignee
Koninklijke Philips N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips N.V. filed Critical Koninklijke Philips N.V.
Publication of WO2018002347A1 publication Critical patent/WO2018002347A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/00002 Operational features of endoscopes
    • A61B 1/00004 Operational features of endoscopes characterised by electronic signal processing
    • A61B 1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/000094 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope extracting biological structures
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/313 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for introducing through surgical openings, e.g. laparoscopes
    • A61B 1/3132 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for introducing through surgical openings, e.g. laparoscopes for laparoscopy
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/754 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries involving a deformation of the sample pattern or of the reference pattern; Elastic matching
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/44 Constructional features of apparatus for radiation diagnosis
    • A61B 6/4417 Constructional features of apparatus for radiation diagnosis related to combined acquisition of different diagnostic modalities
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10068 Endoscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular

Definitions

  • the technical field generally relates to registering tomographic imaging of a region of interest of a subject and endoscopic imaging including a surface of the region of interest.
  • the endoscopic imaging is laparoscopic imaging.
  • Laparoscopic surgery also called minimally invasive surgery (MIS), bandaid surgery, or keyhole surgery, is a surgical technique using a laparoscope.
  • the laparoscope generally includes a video camera and a light source to illuminate the operative field.
  • the abdomen is usually insufflated with, for example, carbon dioxide gas. This elevates the abdominal wall above the internal organs to create a working and viewing space.
  • Laparoscopic surgery may include operations within the abdominal or pelvic cavities.
  • a laparoscopic surgery may involve use of surgical instruments such as: forceps, scissors, probes, dissectors, hooks, retractors, etc.
  • Laparoscopic and thoracoscopic surgery belong to the broader field of endoscopy.
  • Exemplary laparoscopic procedures are removal of the gallbladder, removal of the appendix, removal of patches of endometriosis, removal of parts of the intestines, female sterilization, treating ectopic pregnancy, taking a biopsy of various structures inside the abdomen, which can be looked at under the microscope and/or tested in other ways, investigative procedures, etc.
  • pre-operative imaging, e.g. computed tomography (CT) or magnetic resonance imaging (MRI), may be used to provide a pre-operative model of the region of interest.
  • the tissue in the region of interest may be deformed compared to the pre-operative model due to patient respiration, movement between the pre-operative and intraoperative settings, insufflation, manipulation of the anatomical structures in the region of interest, etc.
  • the laparoscopic camera generally provides a small field of view that limits the amount of information that may be used for registration of the pre-operative model.
  • registration is further complicated due to a lack of tissue surface texture and the lack of common landmarks across modalities.
  • US 2014/0241600 discloses a system for performing a combined surface reconstruction and registration of stereo laparoscopic images during a surgical operation.
  • the system includes a rotational angiography system, a receiver module, and an imaging computer.
  • the rotational angiography system is configured to generate an intraoperative three-dimensional model of an anatomical area of interest.
  • the receiver module is configured to receive a plurality of stereo endoscopic images of the anatomical area of interest from a laparoscope.
  • the imaging computer is configured to perform an iterative process a plurality of times until a registration error value is below a threshold value.
  • the iterative process includes performing a stereo reconstruction process using the plurality of stereo endoscopic images to yield a surface image corresponding to the anatomical area of interest; performing a registration process to align the surface image with a surface of the intraoperative three- dimensional model; and updating the registration error value based on a displacement of one or more points of the surface image resulting from the registration process.
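The loop structure described in this prior-art document can be sketched as follows. This is an illustrative sketch only, not the cited system's code: `stereo_reconstruct` and `register` are hypothetical stand-ins (a 1-D "surface" of averaged stereo values and a mean-offset rigid alignment), chosen just to show the reconstruct-register-update cycle that repeats until the error drops below a threshold.

```python
def stereo_reconstruct(stereo_pairs):
    """Stand-in for stereo reconstruction: average the two views per point."""
    return [(l + r) / 2.0 for l, r in stereo_pairs]

def register(surface, model_surface):
    """Stand-in rigid registration: shift the surface by the mean offset."""
    offset = sum(m - s for s, m in zip(surface, model_surface)) / len(surface)
    return [s + offset for s in surface], abs(offset)

def iterative_registration(stereo_pairs, model_surface, threshold=0.01, max_iter=20):
    """Repeat reconstruction/registration until the error value is below threshold."""
    surface = stereo_reconstruct(stereo_pairs)
    error = float("inf")
    for _ in range(max_iter):
        surface, displacement = register(surface, model_surface)
        error = displacement  # error updated from point displacement, as described
        if error < threshold:
            break
    return surface, error
```

In this toy form the loop converges in two iterations; a real implementation would reconstruct a 3-D surface mesh and use a full rigid or deformable transform.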
  • Various techniques may be applied for registering the three-dimensional model to images from the laparoscope including, for example, manual alignment, calibration-based methods that use external tracking devices, landmark-based methods, and shape-based methods. This document recognizes that landmark-based methods may require compensating for the lack of landmarks across modalities. This document thus recognizes that landmark-based methods may not be functional for successful registration of images across imaging modalities. It does not enable a working solution to this registration problem.
  • Methods, image processing systems, systems and computer programs are provided for registration of tomographic imaging and endoscopic imaging.
  • An image processing system is provided for registering tomographic imaging of a region of interest of a subject and endoscopic imaging including a surface of the region of interest.
  • the image processing system comprises:
  • a data interface unit adapted to receive tomographic imaging data and endoscopic imaging data
  • at least one processor adapted to segment the tomographic imaging data to identify sub-surface structures, to identify corresponding structures in the endoscopic imaging data, and to register the tomographic imaging data and the endoscopic imaging data based on the corresponding structures.
  • in the image processing system, shared structures in the segmented tomographic imaging data and the endoscopic imaging data are identified and utilized for registration purposes. Since the image processing system corresponds sub-surface structures of the segmented tomographic imaging data with structures in the endoscopic imaging data, sufficient shared structures across the two imaging modalities are provided for accurate registration.
  • the tomographic imaging data may be pre-operative or intra-operative imaging data.
  • the tomographic imaging data is three dimensional imaging data within the region of interest.
  • the tomographic imaging data and the endoscopic imaging data overlap in the field of imaging at a surface imaged by the endoscopic imaging data to allow for identification of shared structures and registration based thereon.
  • the tomographic imaging data may be CT imaging data or MRI imaging data.
  • the imaging data may be angiogram imaging data.
  • rotational angiogram imaging data may be utilized.
  • the endoscopic imaging data may be laparoscopic imaging data.
  • the endoscopic imaging data may be two dimensional imaging data of a surface part of the region of interest.
  • the endoscopic imaging data may be video data.
  • the endoscopic imaging data may be such as to allow identification of subsurface structures therein using the endoscopic surface imaging data.
  • Image processing may be performed to enhance the endoscopic surface imaging data with respect to identifying the sub-surface structures.
  • common sub-surface structures from the two imaging modalities can be utilized for accurate registration thereof.
  • although endoscopic imaging data is taken of the surface, it does have a limited sub-surface depth in the field of view.
  • the registration technique disclosed herein makes use of sub-surface capabilities of endoscopic imaging data to identify sub-surface structures in both the endoscopic imaging data and the tomographic imaging data for registration of the two imaging modalities.
  • an endoscopic camera is able to visualize sub-surface structures. Further, the camera may be adapted to capture data in an enhanced way to bring out these sub-surface structures and/or the data may be image processed to enhance visualization of the sub-surface structures, as described below.
  • the at least one processor may be adapted to perform a segmentation processing technique on the endoscopic imaging data to identify the structures in the endoscopic imaging data.
  • a segmentation image processing technique produces a model of the two dimensional endoscopic imaging data, which can be processed with respect to the model produced by the segmentation of the tomographic imaging data to identify common structures for registration. Segmentation of the endoscopic imaging data enhances identification of structures therein that can be aligned with corresponding sub-surface structures in the tomographic imaging data.
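As a minimal sketch of what "segmentation" can mean here (not the patent's method), the following thresholds a 2-D intensity image and groups connected foreground pixels into labelled structures, turning raw endoscopic pixels into a simple model that could be matched against segmented tomographic structures. The dark-pixels-are-vessels assumption and the threshold value are illustrative only.

```python
def segment(image, threshold):
    """Label 4-connected components of pixels below `threshold` (dark vessels)."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] < threshold and labels[sy][sx] == 0:
                current += 1          # start a new labelled structure
                stack = [(sy, sx)]
                while stack:          # flood-fill one connected component
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and labels[y][x] == 0 \
                            and image[y][x] < threshold:
                        labels[y][x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels
```

Production systems would use more robust techniques (e.g. vesselness filtering), but the output is the same in kind: a labelled model of candidate structures.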
  • the endoscopic imaging data may be obtained from a spectral, hyperspectral, multispectral or thermographic imaging device.
  • the endoscopic imaging data is spectral, hyperspectral, multispectral or thermographic.
  • Such imaging allows the endoscopic imaging data to be enhanced for identifying sub-surface structures for registration with corresponding sub-surface structures in the tomographic imaging data.
  • spectral image processing can be performed on the endoscopic imaging data to differentiate the corresponding structures from surrounding tissue.
  • endoscopic imaging data may be obtained at plural wavelengths for each pixel to allow identification of a type of tissue, e.g. venous tissue versus organ tissue versus soft fatty tissue versus bone tissue, etc.
  • the type of tissue can be identified using the spectral image processing, which can then be used in the segmentation image processing technique described above to identify the corresponding structures.
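The per-pixel tissue typing described above can be sketched as a nearest-reference-spectrum classifier. This is an assumption-laden illustration, not the patent's algorithm: the reference spectra values are invented, and real spectral classification would use calibrated absorption data.

```python
REFERENCE_SPECTRA = {            # reflectance per wavelength bin (made-up values)
    "vessel": [0.10, 0.05, 0.30],
    "organ":  [0.40, 0.35, 0.45],
    "fat":    [0.70, 0.65, 0.60],
}

def classify_pixel(spectrum):
    """Return the tissue type whose reference spectrum is closest (squared distance)."""
    def dist(ref):
        return sum((s - r) ** 2 for s, r in zip(spectrum, ref))
    return min(REFERENCE_SPECTRA, key=lambda t: dist(REFERENCE_SPECTRA[t]))

def classify_image(cube):
    """cube[y][x] is a per-pixel spectrum; returns a tissue-type label map."""
    return [[classify_pixel(px) for px in row] for row in cube]
```

The resulting label map could then feed the segmentation step, with "vessel" pixels forming the structures used for registration.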
  • the tomographic imaging data may be contrast enhanced. This step allows for enhanced differentiation of blood vessels from other tissue.
  • the at least one processor may be adapted to identify blood vessels as the subsurface structures of the tomographic imaging data and the structures of the endoscopic imaging data.
  • the at least one processor may be adapted to perform segmentation image processing techniques on both the tomographic imaging data and the endoscopic imaging data to identify blood vessels therein, which can be used as common structures for registering the two imaging modalities.
  • the at least one processor may be adapted to register the tomographic imaging data and the endoscopic imaging data using an elastic registration process.
  • deformation in the region of interest as a result of insufflation, manipulation with surgical tools, breathing and other causes can be determined from the endoscopic imaging data and used, based on the registration process, to update the tomographic imaging data. For example, if a part of a surface above a tumor is shifted in the endoscopic imaging data by a surgical tool, the tomographic imaging data can be correspondingly deformed using the registered data from the two imaging modalities.
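One hedged way to picture propagating an observed surface shift into the model: apply the displacement measured at the registered surface to model points, attenuated with depth. The exponential attenuation and the `decay` parameter are illustrative assumptions, not the patent's deformation model.

```python
def deform_model(points, surface_shift, surface_z, decay=0.5):
    """Shift each (x, y, z) model point by surface_shift, scaled down with depth."""
    moved = []
    for x, y, z in points:
        w = decay ** abs(z - surface_z)   # weight decays away from the surface
        moved.append((x + surface_shift[0] * w, y + surface_shift[1] * w, z))
    return moved
```

A point at the surface moves by the full observed shift; deeper points move progressively less, so a sub-surface tumor model follows the tool-induced surface motion plausibly.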
  • the at least one processor may be adapted to minimize a similarity measure which is determined based on alignment of the sub-surface structures of the tomographic imaging data and structures of the endoscopic imaging data as part of registering the two modalities of imaging data.
  • the at least one processor may perform the elastic registration process by including processing steps of iteratively shifting and/or bending the endoscopic imaging data with respect to the tomographic imaging data to minimize a measure of registration error determined based on registration of the sub-surface structures in the tomographic imaging data and the corresponding structures in the endoscopic imaging data.
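The shift-and-score idea can be sketched with a brute-force search over candidate shifts of the endoscopic structure points against the tomographic structure points, keeping the shift that minimises a registration-error measure. This rigid 2-D search is a simplification under stated assumptions: real elastic registration would also bend the data, and the Manhattan-distance error measure is illustrative.

```python
def registration_error(moving, fixed):
    """Mean distance from each moving point to its nearest fixed point."""
    def nearest(p):
        return min(abs(p[0] - q[0]) + abs(p[1] - q[1]) for q in fixed)
    return sum(nearest(p) for p in moving) / len(moving)

def best_shift(moving, fixed, search=range(-3, 4)):
    """Try every candidate (dx, dy) shift and keep the one with lowest error."""
    best = (0, 0)
    best_err = registration_error(moving, fixed)
    for dx in search:
        for dy in search:
            shifted = [(x + dx, y + dy) for x, y in moving]
            err = registration_error(shifted, fixed)
            if err < best_err:
                best_err, best = err, (dx, dy)
    return best, best_err
```

In practice this exhaustive search would be replaced by an iterative optimizer over a deformable transform, but the error-minimisation structure is the same.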
  • the at least one processor may be adapted to initialize the elastic registration process based on tracking data from a tracking system for tracking a position of the endoscope. This step may increase processing efficiency by initializing the elastic registration process with realistic values. If the position of the endoscope is known, then the field of view is known, which can be used to initialize a search space for the registration process.
  • the at least one processor is adapted to initialize the elastic registration process by a likely position of the endoscopic imaging data within the tomographic imaging data based on the tracking data and/or by a realistic search space, determined based on the tracking data, within the tomographic imaging data for a position and/or bend of the endoscopic imaging data.
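A small sketch of how tracking data can bound the search space: given a tracked endoscope tip position, restrict candidate registration positions to a window around the expected field of view instead of the whole volume. The function and its `fov_radius` parameter are hypothetical, meant only to illustrate the initialization idea.

```python
def init_search_space(tracked_tip, fov_radius):
    """Return (min, max) bounds per axis around the tracked tip position."""
    return [(c - fov_radius, c + fov_radius) for c in tracked_tip]
```

Feeding such bounds to the registration optimizer avoids searching implausible positions, which is the processing-efficiency gain described above.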
  • the system may comprise a display generation module configured to generate an integrated and registered display of endoscopic images based on the endoscopic imaging data and images based on the tomographic imaging data.
  • a display may include a spatially registered overlay of the tomographic imaging data and the endoscopic imaging data.
  • the display may react to deformation in the region of interest when an elastic registration process is utilized to update the displayed tomographic imaging data based on the deformation.
  • an imaging system is provided comprising the image processing system, a tomographic imaging machine for obtaining the tomographic imaging data, and an endoscope for obtaining the endoscopic imaging data.
  • the imaging system may comprise a tracking system for tracking a position of the endoscope.
  • the tracking system may comprise at least one video camera associated with the imaging machine.
  • the at least one video camera may be connected to a detector of the imaging machine.
  • a computer readable medium is provided having stored thereon the computer program element.
  • FIG. 1 is a schematic view of an imaging system including an endoscope, an imaging machine, a display unit and an image processing system;
  • FIG. 2 is a flow chart of image registration;
  • FIG. 3 is a schematic representation of registering imaging from different modalities in an elastic way;
  • FIG. 4 is a flowchart of an exemplary elastic image registration algorithm; and
  • FIG. 5 is a representation of an exemplary elastic registration process with respect to the imaging data.
  • FIG. 1 discloses an imaging system 32, comprising an image processing system 10, a tomographic imaging machine 36 for obtaining tomographic imaging data, and an endoscope 12 for obtaining endoscopic imaging data.
  • the imaging system further comprises a tracking system 35 for tracking a position of the endoscope 12.
  • a display unit 24 for displaying registered and integrated tomographic and endoscopic imaging data.
  • the imaging system 32 is useful for assisting an endoscopic surgical procedure in which tomographic imaging data is registered with imaging data from the endoscope 12 by way of a registration process carried out by the image processing system 10.
  • a patient 33 is positioned on a patient support table 34.
  • Tomographic imaging data may be pre-operative or intra-operative.
  • the endoscopic device 12 is utilized to view a surface region of interest of the patient 33, whilst the tomographic imaging data includes a three dimensional volume of the region of interest including the surface.
  • the registered imaging data is able to be displayed on the display unit 24 to assist an operative in directing surgical tools, including the endoscope 12.
  • the image processing system 10 is able to perform an elastic registration process so that the tomographic imaging data accurately reflects tissue deformation captured by the endoscope 12 in the endoscopic imaging data.
  • the present disclosure is particularly suited to laparoscopic procedures.
  • the imaging machine 36 may be utilized pre-operatively or intra-operatively to generate the tomographic imaging data that is to be registered to the endoscopic imaging data.
  • the tomographic imaging data is three dimensional imaging data within the region of interest.
  • the tomographic imaging data and the endoscopic imaging data overlap in the field of imaging at a surface imaged by the endoscopic imaging data to allow for identification of shared structures and registration based thereon.
  • the tomographic imaging data may be CT imaging data or MRI imaging data.
  • the imaging data may be angiogram imaging data. For CT imaging data, rotational angiogram imaging data may be utilized.
  • the imaging machine 36 is a so-called C-arm machine having a detector 20 and an X-ray source 22 mounted to ends of a C-shaped arm 21.
  • the C-arm 21 is able to be rotated so that the detector 20 and the source 22 are directed at different angles relative to a patient 33. This allows slices of tomographic imaging data to be obtained of a region of interest of the patient 33. More specifically, the imaging machine 36 is able to obtain a three dimensional volume, optionally by a rotational CT image acquisition scan, such as cone-beam CT scanning, using motorized C-arm 21 motion when the imaging machine is of the C-arm kind.
  • the endoscope 12 generally includes an elongate insertion tube 14 for insertion in the patient 33, and a video camera 16 for obtaining the endoscopic imaging data (which is generally video data).
  • the endoscope 12 may also include one or more lenses to focus and direct images onto image capture elements (e.g. charge-coupled devices, CCDs) of the camera 16.
  • the camera 16 may be located at the proximal end of the endoscope 12 or at the distal end.
  • the endoscope may include an optical transmission medium (such as optical fiber) extending between distal and proximal ends of the elongate insertion tube 14.
  • the elongate insertion tube 14 is generally rigid in the laparoscopic system shown in the embodiment of FIG. 1.
  • the endoscope 12 includes an illuminator (not shown) for illumination of a surface part of a region of interest of the patient 33.
  • the imaging system 32 may comprise a first bracket 17 for mounting the endoscope 12 to the patient support table 34 and a second bracket 15 for mounting the endoscope 12 to the patient 33.
  • the brackets 15, 17 assist in securing the endoscope 12 relative to the patient 33.
  • the endoscope 12 using the camera 16 integrated therewith, is able to obtain optical image information for guiding minimally invasive surgical treatments.
  • Optical images may be acquired using the camera 16 inside the abdomen using gas or lift laparoscopy.
  • the abdominal wall is elevated to provide space for generating the optical images and for maneuvering surgical tools (including the endoscope 12).
  • the camera 16 is able to obtain surface imaging from an end of the endoscope 12 that is positioned below the skin through an incision, often within an inflated abdominal cavity. Such surface imaging is able to guide an operative in investigative procedures and is able to guide placement of surgical tools in interventional procedures.
  • the imaging data from the camera 16 is able to be displayed on the display unit 24 through use of the image processing system 10.
  • the camera 16 is a spectral imaging device.
  • spectral imaging devices obtain imaging data that include a spectral and an intensity component.
  • each spatial pixel of imaging data includes spectrally differentiated data.
  • Spectral cameras collect optical information as a set of images. Each image represents a wavelength range of the electromagnetic spectrum, also known as a spectral band. These images are combined to form a three-dimensional spectral data cube for processing and analysis by the image processing system 10, where x and y represent two spatial dimensions of the surface in the region of interest, and λ represents the spectral dimension comprising a range of wavelengths. That is, a three-dimensional data cube can be realized with the spatial information in two dimensions and spectral information arranged in the third dimension.
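The (x, y, λ) data cube described above can be pictured as a stack of 2-D band images along the spectral dimension. This toy representation and its helper functions are illustrative only:

```python
def make_cube(band_images):
    """band_images: one 2-D intensity image per wavelength band.
    cube[b][y][x] = intensity in band b at spatial pixel (x, y)."""
    return band_images

def pixel_spectrum(cube, x, y):
    """Spectrum of pixel (x, y): its intensity in every wavelength band."""
    return [band[y][x] for band in cube]
```

Extracting `pixel_spectrum` for each pixel is what makes per-pixel tissue typing possible downstream.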
  • the camera 16 and image processing system 10 may be able to spectrally split the incoming light into wavelength bins, with intensity measurements captured for each bin for each pixel, thereby providing spatial and spectral imaging data. This wavelength splitting can be performed using filters and/or chromatic dispersion components.
  • the spectral imaging camera 16 may be multispectral or hyperspectral. Hyperspectral imaging can include one hundred or more images taken at different wavelength ranges. A multispectral imaging device may be more concentrated in the wavelength ranges over which it takes images. For example, wavelength ranges may be selected that correspond to peak absorption or reflection wavelengths of structures of interest such as blood vessels, bones, organ tissue, tumor tissue, fat tissue, etc.
  • the camera 16 may also be able to obtain images in the infrared domain, which may assist in identifying tissue structures from the endoscopic imaging data.
  • the region of interest may have a thermal profile such that suitable infrared imaging would allow registration structures to be identified.
  • as an alternative or in addition to the above described spectral imaging device (e.g. multispectral or hyperspectral), the camera 16 may perform infrared thermography (IRT), also known as thermal imaging or thermal video.
  • thermography allows identification of variations in temperature in the region of interest, which can allow identification of structures of interest for use in the registration process.
  • the endoscope 12 may deliver light through the illuminator at specific wavelengths and the camera 16 may operate with specific wavelength filters for enhancing visualization of tissue structures such as blood vessels.
  • various techniques can be employed to allow the camera 16 and the image processing system 10 to enhance visualization and identification of structures in the surface imaging data for use in the registration process.
  • these techniques allow enhancement of visualization of sub-surface structures in the surface imaging data for registration with corresponding sub-surface structures in the tomographic imaging data.
  • the endoscope 12 delivers a sequence of video images such as thermography images, multispectral images or hyperspectral images, which enable the detection of structures for use in the registration process such as blood vessels (e.g. arteries and/or veins) in a two dimensional image of a surface part of the region of interest.
  • the image processing system 10, such as a general purpose computer, is operably connected to the imaging machine 36 and processes the imaging data from the imaging machine 36.
  • the processed imaging data may be presented on the display unit 24 of the imaging system 32.
  • the image processing system 10 comprises at least one processor 30.
  • the processor 30 is operably connected to a memory 28.
  • the processor 30 and the memory 28 may be connected through a bus 38.
  • the processor 30 may be any device capable of executing program instructions, such as one or more microprocessors.
  • the memory may be any volatile or non-volatile memory device, such as a removable disc, a hard drive, a CD, a Random Access Memory (RAM), a Read Only Memory (ROM), or the like.
  • the processor 30 may be embodied in a general purpose computer.
  • a display generation module 40 is also operably connected to the processor 30 through the bus 38.
  • the display generation module 40 is configured to generate, with the processor 30, display of images for the display unit 24.
  • the display generation module 40 may be implemented by hardware, software or a combination thereof.
  • the display generation module 40 may be included as programming instructions for the processor 30 and stored on the memory 28.
  • the memory 28 has encoded thereon, at least one computer program 44 or computer program element 44, providing instructions which are executable by the processor 30 to process images from the imaging machine 36 and from the endoscope 12.
  • a computer program 44 is also provided that performs a method of registering imaging data from the imaging machine 36 and from the endoscope 12.
  • the computer program 44 is also adapted to implement features of the image processing system 10, as described further herein.
  • the image processing system 10 may be co-located with the imaging machine 36.
  • the image processing system 10 may take on a distributed architecture.
  • the image processing system 10 includes a data interface unit 26 that is configured for receiving imaging data.
  • the data interface unit 26 may receive tomographic imaging data from the imaging machine 36, the memory 28, over a wireless network or from some other source.
  • the tomographic imaging data may be stored in the memory 28. Such data may be obtained pre-operatively by the same imaging machine 36 as used during the procedure.
  • the tracking system 35 comprises at least one video camera 37 associated with the imaging machine 36.
  • the at least one video camera 37 is connected to the detector 20 of the imaging machine 36.
  • plural video cameras 37 are included in the tracking system 35 that obtain video data of the endoscope 12 from different angles, thereby allowing determination of a position of the endoscope 12 in three dimensional space.
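Why plural cameras allow a 3-D position determination can be sketched with a simplified 2-D triangulation: each camera constrains the endoscope tip to a viewing ray, and two rays from different camera positions intersect at a point. The camera geometry (cameras on the x-axis, viewing angles in radians) is a hypothetical simplification, not the tracking system's actual calibration model.

```python
import math

def triangulate_2d(cam1_x, angle1, cam2_x, angle2):
    """Intersect two rays y = tan(angle) * (x - cam_x) from two cameras."""
    t1, t2 = math.tan(angle1), math.tan(angle2)
    x = (t1 * cam1_x - t2 * cam2_x) / (t1 - t2)
    y = t1 * (x - cam1_x)
    return x, y
```

A real system would use full calibrated projection matrices and a least-squares intersection in 3-D, but the principle is the same ray intersection.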
  • the image processing system 10 is configured, by operation of the computer program 44, to register tomographic imaging of a region of interest of a subject and endoscopic imaging from the endoscopic imaging device 12 including a surface of the region of interest.
  • the processor 30 is adapted to perform image processing and registration of the processed images as described in the flowchart of FIG. 2.
  • FIG. 2 shows a flow chart of steps performable by the computer program 44 disclosed herein, steps performed according to the computer implemented method disclosed herein and steps performed by the image processing system 10, through operation of the processor 30.
  • FIG. 3 shows exemplary imaging data that serve to illustrate the registration processes described herein.
  • in step 100, three dimensional tomographic imaging data is segmented to identify sub-surface structures such as blood vessels, bones, tumours, soft tissue, etc.
  • the segmentation process can be performed using known algorithmic segmentation techniques to produce a model of the sub-surface structures.
  • the segmentation process is assisted where the tomographic imaging data is contrast enhanced since blood vessels are more clearly differentiable from surrounding tissue. For such data, a segmentation process with respect to blood vessels is preferred.
  • an exemplary image 162 of segmented tomographic imaging data is shown in the yz plane as a result of the segmentation step 100.
  • visible in the image 162 are blood vessels 154, a surface of an organ 150, a tumor 152 and sub-surface structures 156, 158, 160 for use in registration.
  • the segmentation process of step 100 particularly has been utilized to identify sub-surface blood vessels including parts 156, 158, 160 thereof adjacent a surface of an organ 150.
  • step 100 implements a segmentation method to segment three dimensional spatial relation of sub-surface structures (e.g. blood vessels such as arteries and veins) from the tomographic imaging data of the region of interest.
  • In step 102, structures in the endoscopic imaging data, particularly sub-surface structures, corresponding to the sub-surface structures in the tomographic imaging data are algorithmically identified.
  • Step 102 may include segmentation of the endoscopic imaging data from the endoscopic device 12 to identify sub-surface structures of the same kind as the sub-surface structures identified in the segmentation of the tomographic imaging data.
  • the segmentation may be with respect to blood vessels.
  • the two dimensional optical imaging data from the endoscopic device 12 can only "see" a limited distance beyond the physical surface (perhaps only to a depth of between 1 mm and 5 mm beyond the physical surface boundary).
  • the common sub-surface structures will be near surface structures identified in the endoscopic imaging data and the tomographic imaging data.
  • the endoscopic imaging data is spectral imaging data or thermographic imaging data.
  • the imaging data captured by the camera 16 of the endoscope 12 provides images in two spatial dimensions (x and y in FIG. 3) and a spectral dimension. That is, each frame of video data obtained by the endoscopic camera 16 is made up of two dimensional images at plural (e.g. at least 2, 3, 5, 10, 100, etc.) different wavelength ranges or bins.
  • a spectral bin could be included corresponding to peak absorption of blood vessels and a spectral bin could be included for one or more other tissue types.
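As a hedged sketch of how such spectral bins might be exploited, the fragment below flags pixels whose reflectance collapses in a bin placed on a haemoglobin absorption peak. The bin layout, the 0.5 ratio and all names are assumptions for illustration only, not values from the disclosure.

```python
import numpy as np

# Illustrative spectral bins for a multispectral endoscopic frame; the
# middle bin is imagined to sit on a haemoglobin absorption peak, where
# blood reflects little light compared with surrounding tissue.
BIN_BLUE, BIN_HB_PEAK, BIN_RED = 0, 1, 2

def vessel_pixels(cube, ratio=0.5):
    """cube: (H, W, 3) reflectance values per pixel and spectral bin.

    Flag pixels whose reflectance in the haemoglobin bin drops well
    below their red-bin reflectance; the 0.5 ratio is an assumption.
    """
    return cube[..., BIN_HB_PEAK] < ratio * cube[..., BIN_RED]

# Toy 2x2 frame: one vessel pixel absorbing strongly in the Hb bin.
cube = np.full((2, 2, 3), 0.6)
cube[0, 1, BIN_HB_PEAK] = 0.1   # vessel pixel
mask = vessel_pixels(cube)
print(int(mask.sum()))          # 1
```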
  • in the case of thermographic imaging, infrared imaging data is captured by the camera that is sufficiently sensitive to allow sub-surface structures to be identified based on those sub-surface structures being at a different temperature (e.g. warmer) than a surface or other surrounding structure in the image.
  • a thresholding technique could be applied such that hottest regions of the endoscopic imaging data are highlighted to facilitate vessel segmentation.
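Such a thresholding step might look like the following sketch, where the percentile cut-off and the toy temperatures are illustrative assumptions rather than clinical values.

```python
import numpy as np

def hottest_regions(frame, percentile=90):
    """Threshold a thermographic frame to keep only its hottest pixels.

    Sub-surface vessels carry warm blood, so the upper tail of the
    temperature distribution is a crude vessel proposal; the 90th
    percentile cut-off is an illustrative choice.
    """
    return frame >= np.percentile(frame, percentile)

# Toy 10x10 frame at 36.5 deg C with ten warmer "vessel" pixels at 38.0.
frame = np.full((10, 10), 36.5)
frame[4, :] = 38.0
mask = hottest_regions(frame)
print(int(mask.sum()))   # 10
```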
  • a combination of thermographic and spectral imaging could also be applied.
  • the spectral imaging data may include two dimensional images taken in infrared wavelength ranges or bins (possibly in addition to spectral bins in the visible range).
  • In FIG. 3, the surface of an organ 150 is imaged with the camera 16 of the endoscopic imaging device 12 and the two dimensional image 164 in the xy plane is shown.
  • a segmentation method is applied according to step 102 and near surface, sub-surface structures 156, 158, 160 are identified, which are parts of blood vessels in the illustrated embodiment. These structures 156, 158, 160 are able to be used for registration with the corresponding structures 156, 158, 160 identified in the segmented tomographic imaging data from step 100, as will be explained in the following.
  • the registration process exemplified by the flow chart of FIG. 2 includes a step 104 of registering the segmented surface imaging from step 102 with the segmented tomographic imaging data from step 100 by aligning the identified corresponding sub-surface structures.
  • Techniques for performing such registration, once suitable structures for alignment have been identified, are known in the art: the Coherent Point Drift algorithm as briefly outlined below, other known non-rigid (elastic) registration techniques, or the method outlined according to FIGS. 4 and 5 below could be used.
  • Such alignment can be iteratively performed and the registration technique may be a non-rigid one to ensure that deformation in the region of interest, as captured by the endoscopic imaging camera 16, is transformed to corresponding deformation in the registered tomographic imaging data.
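To make the alignment idea concrete without reproducing the full Coherent Point Drift derivation, the sketch below performs only the simplest rigid, translation-only alignment of two segmented vessel point sets by centroid matching. CPD itself additionally estimates soft correspondences and non-rigid deformation; the point coordinates here are toy values.

```python
import numpy as np

def centroid_align(src, dst):
    """Rigidly translate point set `src` onto `dst` by matching centroids.

    This is only the zeroth-order rigid step of point-set alignment,
    shown to make the idea of aligning two segmented vessel point sets
    concrete; Coherent Point Drift solves the general probabilistic,
    non-rigid correspondence problem.
    """
    t = dst.mean(axis=0) - src.mean(axis=0)
    return src + t, t

# Vessel branch points seen by the endoscope, and the same branch
# points in the tomographic frame, offset by a known translation.
endo_pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tomo_pts = endo_pts + np.array([5.0, -2.0])
moved, t = centroid_align(endo_pts, tomo_pts)
print(t.tolist())   # [5.0, -2.0]
```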
  • the three dimensional tomographic data set may be registered to the setup of the endoscope 12, for example prior or subsequent to abdominal wall lifting in laparoscopic procedures, so that a coordinate system of the tracking system 35 and a coordinate system of the tomographic imaging data can be registered.
  • the registration step 104 is performed for elastic registration of the two dimensional endoscopic imaging data and the three dimensional tomographic imaging data, which registers spectral (e.g. hyperspectral) two dimensional endoscopic images to the three dimensional tomographic data.
  • the registration may comprise estimating a relative position of the endoscope 12, particularly a distal viewing end of the endoscope camera 16, in relation to the tomographic imaging data.
  • the initial position of the endoscope 12 relative to a tissue layer of the tomographic imaging data can be estimated using an external tracking system such as the tracking system 35, which uses video data from cameras 37 fixed to the imaging machine 36. Since the position of the endoscope 12 relative to the imaging machine 36 is known from the tracking system 35, the images of the endoscope and the tomographic images can be initially registered. Thereafter, sub-surface structures seen by the endoscope camera 16, such as blood vessels, can be used to register the two dimensional images when the same structures have also been identified in the tomographic images. In this way, superficial structures such as blood vessels are used for registration.
  • the tracking system 35 is thus used to locate the position of the endoscope 12 in three dimensional space and in the three dimensional coordinate system of the tomographic imaging data, since the coordinate system of the tracking system and that of the endoscope have been registered. This allows initial registration of the two dimensional imaging data in the three dimensional imaging data.
  • the initial registration can be elastically refined to take into account deformations by registering sub-surface structures as described above.
  • a registered and integrated image 166 as a result of the registration process can be seen.
  • the segmented three dimensional images 162 have been morphed with respect to the surface two dimensional image 164 to align the identified subsurface structures 156, 158, 160 in order to render registered image 166.
  • Such a process is iteratively performed to produce an integrated and registered video.
  • Such images allow a surgeon to see the surface imaging and the three dimensional segmented model at the same time on the display unit 24 to accurately guide interventional procedures (and other procedures). In particular, it may allow a surgeon to avoid non-superficial blood vessels, which are too deep to visualize from just the endoscopic imaging data (even if enhanced as described above), when making incisions.
  • In FIGS. 3(D) and 3(E), images 168, 170 relevant to elastic registration are shown.
  • In FIG. 3(D), a two dimensional image 168 from the endoscopic camera 16 can be seen in which deformation has occurred. Such deformation may be a result of, for example, surgical instrument-organ interaction.
  • the elastic registration process iteratively transforms the three dimensional imaging data to the deformed two dimensional images 168 by aligning the corresponding sub-surface structures 156, 158, 160.
  • Such deformation is tracked in the registered and integrated image 170 shown in FIG. 3(E). From FIG. 3(E), it can be seen that the tumor 152 and the blood vessels 154 shown in the three dimensional part of the image 170 have been deformed to match the deformation shown by the two dimensional image 168.
  • the surgeon is guided to take into account the deformed position of the tumor 152 and blood vessels 154 when carrying out a procedure.
  • the segmentation step 100 results in soft tissue differentiation to identify tumor and tumor-like pathologies 152, as shown in FIG. 3, that cannot be observed by superficial surface imaging as performed by the endoscope 12 (e.g. a laparoscope 12).
  • Deformation of the target organ 150 (e.g. liver, kidney, esophagus, etc.) or soft tissue structure, which can happen during surgical instrument-organ interaction, is observed by the endoscope 12 as shown in FIG. 3(D).
  • the tomographic images taken by the imaging machine 36, which may be pre-operative images, are deformed to match the sub-surface structures 156, 158, 160 in the two imaging modalities.
  • superficial structures such as the blood vessels 156, 158, 160 can be used for elastic registration, as described above with respect to step 104.
  • the tumor 152 may now be deformed and displaced in the registered images, as shown by comparing the image 170 of FIG. 3(E) with the image 166 of FIG. 3(C).
  • a step 106 of displaying registered and integrated tomographic and endoscopic imaging data is provided.
  • a display is provided on the display unit 24 and generated by the display generation module 40.
  • a video overlay of the elastically deformed and registered tomographic images and the endoscopic surface images may be provided.
  • the two dimensional images from the endoscopic camera 16 may be presented as partly see-through to allow an overlay of the three dimensional model obtained in step 100 to be viewed, to give an impression of depth and to be able to see deeper than allowed by the endoscopic imaging.
  • a video of registered images may be displayed as shown in FIGS. 3(C) and 3(E).
  • In step 106, it is possible to display an integrated endoscopic image showing the superficial vessels from the optical, endoscopic images (either as a result of spectral imaging as described above and/or segmentation according to step 102), as well as the tissue behind the surface layer as provided by the tomographic imaging data, to give an indication of upcoming structures when advancing the endoscope 12.
  • An exemplary registration technique to be used in step 104 is described at a general level with reference to the flow chart of FIG. 4 and the schematic data transformation illustrations shown in FIG. 5.
  • In step 200, the tomographic imaging data is received, which may be pre-operative imaging data.
  • imaging data 250 is schematically illustrated in FIG. 5(a), which shows an organ 150.
  • In step 202, a position of the endoscope relative to the tomographic imaging data is estimated based on tracking data from the video tracking system 35. From this, a field of view of the endoscopic camera 16 within the tomographic imaging data can be estimated, including a depth along a z axis (which coincides with a longitudinal axis of the insertion tube 14 of the endoscope 12), an area of the field of view, an angle of an xy plane being viewed, etc.
  • Such relative positioning of the endoscope 12 is schematically illustrated in the imaging data 252 represented in FIG. 5(b). Also shown in FIG. 5(b) is a possibly deformed organ 150', which can be viewed by the endoscopic camera 16, but which has yet to be transformed to the tomographic imaging data.
  • In step 204, a search space is initialized based on the estimated field of view.
  • the search space may include a range of x, y and z coordinates to be searched in the tomographic imaging data and a coherent bending range to be searched, in order to provide constraints to the search space.
  • the z direction is a depth direction, whereas x and y directions are perpendicular to each other in a plane perpendicular to the z direction.
  • In step 206, an image slab obtained from the endoscopic camera 16 is received.
  • a representation 256 is provided of the image slab 253 seated into the tomographic imaging data at a most likely position based on the estimated field of view from step 202.
  • In step 208, the image slab 253 is shifted iteratively along the x, y and z axes within the search space determined in step 204.
  • a similarity measure is determined based on calculated alignment of the sub-surface structures. The iterations are performed with the aim of minimizing the similarity measure to determine an optimally registered image slab 253 in the tomographic imaging data along the x, y and z directions assuming a rigid image slab 253.
  • an image 256 of the rigid x, y and z shifting of the image slab 253 within the search space is represented.
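The rigid search of steps 206-208 can be sketched as an exhaustive shift of the slab's segmented vessel points over the search space, keeping the shift that minimizes the similarity measure. The mean nearest-neighbour distance used here as the measure, the +/-2 voxel search range and the toy coordinates are illustrative assumptions, not details from the disclosure.

```python
import numpy as np
from itertools import product

def best_rigid_shift(slab_pts, ct_pts, search=range(-2, 3)):
    """Find the x/y/z shift of the slab's vessel points that best aligns
    them with the vessel points segmented from the tomographic data.

    The similarity measure is the mean nearest-neighbour distance
    between the two point sets; smaller means better aligned.
    """
    def measure(shifted):
        d = np.linalg.norm(shifted[:, None, :] - ct_pts[None, :, :], axis=2)
        return d.min(axis=1).mean()
    return min((np.array(s) for s in product(search, repeat=3)),
               key=lambda s: measure(slab_pts + s))

ct_vessels = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
slab_vessels = ct_vessels + np.array([1.0, -2.0, 0.0])   # misplaced slab
print(best_rigid_shift(slab_vessels, ct_vessels).tolist())  # [-1, 2, 0]
```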
  • In step 210, the image slab 253 is no longer treated rigidly and is iteratively bent within the bend range of the search space determined in step 204.
  • the bending is performed with a view to maintaining a coherent image slab, without discontinuities and within reasonable limits for a surface of body tissue (e.g. an organ). Such limits can be predetermined from experimentation.
  • the similarity measure is determined at each iteration and the search continues within the search space to minimize the similarity measure. Referring to FIG. 5(e), a representation 258 of bending of the image slab 253 within the tomographic imaging data is shown.
  • In step 212, with the similarity measure having been minimized in steps 208 and 210, an elastic registration of the two dimensional endoscopic imaging data and the three dimensional tomographic imaging data is obtained.
  • Such an elastic registration provides a registration transformation between the two imaging modalities so that a display can be generated of the two registered and integrated imaging modalities.
  • Such registered imaging data is shown in the image 260 of FIG. 5(f).
  • a computer program or a computer program element is provided that is characterized by being adapted to execute the method steps of the method according to one of the preceding embodiments, on an appropriate processing system.
  • the computer program element might therefore be stored on a computer unit, which might also be part of an embodiment of the present invention.
  • This computing unit may be adapted to perform or induce a performing of the steps of the method described above. Moreover, it may be adapted to operate the components of the above described apparatus.
  • the computing unit can be adapted to operate automatically and/or to execute the orders of a user.
  • a computer program may be loaded into a working memory of a data processor.
  • the data processor may thus be equipped to carry out the method of the invention.
  • This exemplary embodiment of the invention covers both a computer program that uses the invention right from the beginning and a computer program that by means of an update turns an existing program into a program that uses the invention.
  • the computer program element might be able to provide all necessary steps to fulfil the procedure of an exemplary embodiment of the method as described above.
  • a computer readable medium, such as a CD-ROM, is presented, wherein the computer readable medium has a computer program element stored on it, which computer program element is described by the preceding section.
  • a computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.
  • the computer program may also be presented over a network like the World Wide Web and can be downloaded into the working memory of a data processor from such a network.
  • a medium for making a computer program element available for downloading is provided, which computer program element is arranged to perform a method according to one of the previously described embodiments of the invention.

Abstract

A segmentation processing technique is performed on the tomographic imaging data to identify sub-surface structures. A segmentation processing technique is performed on the endoscopic imaging data to identify sub-surface structures corresponding to the identified sub-surface structures in the tomographic imaging data. The tomographic imaging data and the endoscopic imaging data are registered based on registration of the sub-surface structures identified in the endoscopic imaging data and the sub-surface structures identified in the tomographic imaging data. An integrated and registered display of the tomographic and endoscopic imaging data is provided.

Description

REGISTERING TOMOGRAPHIC IMAGING AND ENDOSCOPIC IMAGING
FIELD OF THE INVENTION
The technical field generally relates to registering tomographic imaging of a region of interest of a subject and endoscopic imaging including a surface of the region of interest. In particular, the endoscopic imaging is laparoscopic imaging.
BACKGROUND OF THE INVENTION
Laparoscopic surgery, also called minimally invasive surgery (MIS), bandaid surgery, or keyhole surgery, is a surgical technique using a laparoscope. The laparoscope generally includes a video camera and a light source to illuminate the operative field. For abdominal laparoscopic surgery, the abdomen is usually insufflated with, for example, carbon dioxide gas. This elevates the abdominal wall above the internal organs to create a working and viewing space. Laparoscopic surgery may include operations within the abdominal or pelvic cavities. A laparoscopic surgery may involve use of surgical instruments such as: forceps, scissors, probes, dissectors, hooks, retractors, etc. Laparoscopic and thoracoscopic surgery belong to the broader field of endoscopy.
Exemplary laparoscopic procedures are removal of the gallbladder, removal of the appendix, removal of patches of endometriosis, removal of parts of the intestines, female sterilization, treating ectopic pregnancy, taking a biopsy of various structures inside the abdomen, which can be looked at under the microscope and/or tested in other ways, investigative procedures, etc.
To plan the surgery, pre-operative imaging (e.g., computed tomography (CT) or magnetic resonance imaging (MRI)) is typically used to identify vessels and abnormal tissue and create a three-dimensional segmented model. It is common for this three-dimensional segmented model to be made available during surgery on a monitor, while images generated via the laparoscope are displayed on a separate monitor. Although this provides additional information to the surgeon that may be useful in performing the surgery, it is generally challenging and error-prone to fuse the three-dimensional pre-operative models and laparoscopic images into a single space. For example, the tissue in the region of interest may be deformed compared to the pre-operative model due to patient respiration, movement between the pre-operative and intraoperative settings, insufflation, manipulation of the anatomical structures in the region of interest, etc. Additionally, the laparoscopic camera generally provides a small field of view that limits the amount of information that may be used for registration of the pre-operative model. Moreover, for some tissues, registration is further complicated due to a lack of tissue surface texture and the lack of common landmarks across modalities.
US 2014/0241600 discloses a system for performing a combined surface reconstruction and registration of stereo laparoscopic images during a surgical operation. The system includes a rotational angiography system, a receiver module, and an imaging computer. The rotational angiography system is configured to generate an intraoperative three-dimensional model of an anatomical area of interest. The receiver module is configured to receive a plurality of stereo endoscopic images of the anatomical area of interest from a laparoscope. The imaging computer is configured to perform an iterative process a plurality of times until a registration error value is below a threshold value. The iterative process includes performing a stereo reconstruction process using the plurality of stereo endoscopic images to yield a surface image corresponding to the anatomical area of interest; performing a registration process to align the surface image with a surface of the intraoperative three-dimensional model; and updating the registration error value based on a displacement of one or more points of the surface image resulting from the registration process. Various techniques may be applied for registering the three-dimensional model to images from the laparoscope including, for example, manual alignment, calibration-based methods that use external tracking devices, landmark-based methods, and shape-based methods. This document recognizes that landmark-based methods may require compensating for the lack of landmarks across modalities, and thus that landmark-based methods may not be functional for successful registration of images across imaging modalities. However, it does not enable a working solution to this registration problem.
Thus, it is desired to provide a technique that allows accurate registration of tomographic imaging and endoscopic surface imaging that is able to make use of shared structures in the images. Further, it is desirable to accurately adjust for tissue deformation.
SUMMARY OF THE INVENTION
Hence, there may be a need to provide an improved and facilitated way of registering tomographic imaging and endoscopic surface imaging. The object of the present invention is solved by the subject-matter of the independent claims; wherein further embodiments are incorporated in the dependent claims. It should be noted that the following described aspects of the invention apply also for the image processing system, for the system, and for the computer implemented method as well as for the computer program element and the computer readable medium.
Methods, image processing systems, systems and computer programs are provided for registration of tomographic imaging and endoscopic imaging.
An image processing system is provided for registering tomographic imaging of a region of interest of a subject and endoscopic imaging including a surface of the region of interest. The image processing system comprises:
a data interface unit adapted to receive tomographic imaging data and endoscopic imaging data;
at least one processor adapted to:
perform a segmentation processing technique on the tomographic imaging data to identify sub-surface structures;
identify structures in the endoscopic imaging data corresponding to the subsurface structures in the tomographic imaging data; and
register the tomographic imaging data and the endoscopic imaging data based on registration of the structures identified in the endoscopic imaging data and the sub-surface structures identified from segmentation of the tomographic imaging data.
According to the image processing system, shared structures in the segmented tomographic imaging data and the endoscopic imaging data are identified and utilized for registration purposes. Since the image processing system matches sub-surface structures of the segmented tomographic images with structures in the endoscopic imaging data, sufficient shared structures across the two imaging modalities are provided for accurate registration.
The tomographic imaging data may be pre-operative or intra-operative imaging data.
The tomographic imaging data is three dimensional imaging data within the region of interest. The tomographic imaging data and the endoscopic imaging data overlap in the field of imaging at a surface imaged by the endoscopic imaging data to allow for identification of shared structures and registration based thereon. The tomographic imaging data may be CT imaging data or MRI imaging data. The imaging data may be angiogram imaging data. For CT imaging data, rotational angiogram imaging data may be utilized.
The endoscopic imaging data may be laparoscopic imaging data. The endoscopic imaging data may be two dimensional imaging data of a surface part of the region of interest.
The endoscopic imaging data may be video data.
The endoscopic imaging data may be such as to allow identification of subsurface structures therein using the endoscopic surface imaging data. Image processing may be performed to enhance the endoscopic surface imaging data with respect to identifying the sub-surface structures. In this way, common sub-surface structures from the two imaging modalities can be utilized for accurate registration thereof. In particular, although endoscopic imaging data is taken of the surface, it does have a limited sub-surface depth in the field of view. The registration technique disclosed herein makes use of sub-surface capabilities of endoscopic imaging data to identify sub-surface structures in both the endoscopic imaging data and the tomographic imaging data for registration of the two imaging modalities. As an everyday example of the capability to visualize sub-surface structures from two dimensional optical imaging, the reader will appreciate that it is possible to see sub-surface veins of the hands based on color differentiation with the human eye. Likewise, an endoscopic camera is able to visualize sub-surface structures. Further, the camera may be adapted to capture data in an enhanced way to bring out these sub-surface structures and/or the data may be image processed to enhance visualization of the sub-surface structures, as described below.
The at least one processor may be adapted to perform a segmentation processing technique on the endoscopic imaging data to identify the structures in the endoscopic imaging data. Such a segmentation image processing technique produces a model of the two dimensional endoscopic imaging data, which can be processed with respect to the model produced by the segmentation of the tomographic imaging data to identify common structures for registration. Segmentation of the endoscopic imaging data enhances identification of structures therein that can be aligned with corresponding sub-surface structures in the tomographic imaging data.
The endoscopic imaging data may be obtained from a spectral, hyperspectral, multispectral or thermographic imaging device. Thus, the endoscopic imaging data is spectral, hyperspectral, multispectral or thermographic. Such imaging allows the endoscopic imaging data to be enhanced for identifying sub-surface structures for registration with corresponding sub-surface structures in the tomographic imaging data. In particular, spectral image processing can be performed on the endoscopic imaging data to differentiate the corresponding structures from surrounding tissue. For example, endoscopic imaging data may be obtained at plural wavelengths for each pixel to allow identification of a type of tissue, e.g. venous tissue versus organ tissue versus soft fatty tissue versus bone tissue, etc. The type of tissue can be identified using the spectral image processing, which can then be used in the segmentation image processing technique described above to identify the corresponding structures.
The tomographic imaging data may be contrast enhanced. This step allows for enhanced differentiation of blood vessels from other tissue.
The at least one processor may be adapted to identify blood vessels as the subsurface structures of the tomographic imaging data and the structures of the endoscopic imaging data. The at least one processor may be adapted to perform segmentation image processing techniques on both the tomographic imaging data and the endoscopic imaging data to identify blood vessels therein, which can be used as common structures for registering the two imaging modalities.
The at least one processor may be adapted to register the tomographic imaging data and the endoscopic imaging data using an elastic registration process. In this way, deformation in the region of interest as a result of insufflation, manipulation with surgical tools, breathing and other causes can be determined from the endoscopic imaging data and used, based on the registration process, to update the tomographic imaging data. For example, if a part of a surface above a tumor is shifted in the endoscopic imaging data by a surgical tool, the tomographic imaging data can be correspondingly deformed using the registered data from the two imaging modalities.
The at least one processor may be adapted to minimize a similarity measure which is determined based on alignment of the sub-surface structures of the tomographic imaging data and structures of the endoscopic imaging data as part of registering the two modalities of imaging data.
The at least one processor may perform the elastic registration process by including processing steps of iteratively shifting and/or bending the endoscopic imaging data with respect to the tomographic imaging data to minimize a measure of registration error determined based on registration of the sub-surface structures in the tomographic imaging data and the corresponding structures in the endoscopic imaging data. The at least one processor may be adapted to initialize the elastic registration process based on tracking data from a tracking system for tracking a position of the endoscope. This step may increase processing efficiency by initializing the elastic registration process with realistic values. If the position of the endoscope is known, then the field of view is known, which can be used to initialize a search space for the registration process. Thus, the at least one processor is adapted to initialize the elastic registration process by a likely position of the endoscopic imaging data within the tomographic imaging data based on the tracking data and/or by a realistic search space, determined based on the tracking data, within the tomographic imaging data for a position and/or bend of the endoscopic imaging data.
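The tracking-based initialization described above might be sketched as follows; every name, extent and limit in this fragment is an illustrative assumption rather than a value from the disclosure.

```python
def init_search_space(tracked_pos, fov_extent=5, bend_limit=0.2):
    """Constrain the registration search using the tracked endoscope pose.

    Knowing roughly where the endoscope looks inside the tomographic
    volume lets the elastic registration start from a realistic slab
    position and search only nearby: coordinate ranges of +/- fov_extent
    voxels about the estimated field of view, plus a coherent bending
    range. The extents and the bend limit are illustrative assumptions.
    """
    x, y, z = tracked_pos
    return {
        "x": range(x - fov_extent, x + fov_extent + 1),
        "y": range(y - fov_extent, y + fov_extent + 1),
        "z": range(z - fov_extent, z + fov_extent + 1),
        "bend": (-bend_limit, bend_limit),
    }

space = init_search_space((120, 80, 40))
print(len(space["x"]), space["bend"])   # 11 (-0.2, 0.2)
```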
The system may comprise a display generation module configured to generate an integrated and registered display of endoscopic images based on the endoscopic imaging data and images based on the tomographic imaging data. Such a display may include a spatially registered overlay of the tomographic imaging data and the endoscopic imaging data. Furthermore, the display may react to deformation in the region of interest when an elastic registration process is utilized to update the displayed tomographic imaging data based on the deformation.
Also disclosed is an imaging system, comprising:
the image processing system described above;
and at least one of:
a tomographic imaging machine for obtaining the tomographic imaging data; and
an endoscope for obtaining the endoscopic imaging data.
The imaging system may comprise a tracking system for tracking a position of the endoscope.
The tracking system may comprise at least one video camera associated with the imaging machine. For example, the at least one video camera may be connected to a detector of the imaging machine.
Also disclosed is a computer implemented method for registering tomographic imaging of a region of interest of a subject and endoscopic imaging including a surface of the region of interest, the method comprising:
receiving tomographic imaging data and endoscopic imaging data;
performing a segmentation processing technique on the tomographic imaging data to identify sub-surface structures;
identifying structures in the endoscopic imaging data corresponding to the sub-surface structures in the tomographic imaging data; and
registering the tomographic imaging data and the endoscopic imaging data based on registration of the structures identified in the endoscopic imaging data and the subsurface structures of the model produced from segmentation of the tomographic imaging data.
Also disclosed is a computer program element adapted to implement an image processing system as described herein or adapted to perform the method steps described herein when executed by at least one processor.
A computer readable medium having stored the computer program element.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
DESCRIPTION OF THE DRAWINGS
The exemplary embodiments will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and wherein:
FIG. 1 is a schematic view of an imaging system including an endoscope, an imaging machine, a display unit and an image processing system;
FIG. 2 is a flow chart of image registration;
FIG. 3 is a schematic representation of registering imaging from different modalities in an elastic way;
FIG. 4 is a flowchart of an exemplary elastic image registration algorithm; and
FIG. 5 is a representation of an exemplary elastic registration process with respect to the imaging data.
DETAILED DESCRIPTION
The following detailed description is merely exemplary in nature and is not intended to limit the invention or its application and uses. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.
FIG. 1 discloses an imaging system 32, comprising an image processing system 10, a tomographic imaging machine 36 for obtaining tomographic imaging data, and an endoscope 12 for obtaining endoscopic imaging data. The imaging system further comprises a tracking system 35 for tracking a position of the endoscope 12. Also included in the imaging system is a display unit 24 for displaying registered and integrated tomographic and endoscopic imaging data.
The imaging system 32 is useful for assisting an endoscopic surgical procedure in which tomographic imaging data is registered with imaging data from the endoscope 12 by way of a registration process carried out by the image processing system 10. A patient 33 is positioned on a patient support table 34. The tomographic imaging data may be pre-operative or intra-operative data. The endoscopic device 12 is utilized to view a surface region of interest of the patient 33, whilst the tomographic imaging data includes a three dimensional volume of the region of interest including the surface. The registered imaging data is able to be displayed on the display unit 24 to assist an operative in directing surgical tools, including the endoscope 12. In particular, while the surface of inner organs is generally visible from the endoscopic imaging data, deeper structures like non-superficial blood vessels or structures in-between or behind organs will not be visible. Fusion or registration of two dimensional optical images from the endoscope 12 with three dimensional tomographic data from the imaging machine 36 can allow a surgeon to take deeper structures into consideration, such as when making incisions or other interventions with surgical tools. For example, non-superficial blood vessels or tumors can be avoided. Further, the image processing system 10 is able to perform an elastic registration process so that the tomographic imaging data accurately reflects tissue deformation captured by the endoscope 12 in the endoscopic imaging data. The present disclosure is particularly suited to laparoscopic procedures, where the endoscope 12 is a laparoscope.
The imaging machine 36 may be utilized pre-operatively or intra-operatively to generate the tomographic imaging data that is to be registered to the endoscopic imaging data. The tomographic imaging data is three dimensional imaging data within the region of interest. The tomographic imaging data and the endoscopic imaging data overlap in the field of imaging at a surface imaged by the endoscopic imaging data to allow for identification of shared structures and registration based thereon. The tomographic imaging data may be CT imaging data or MRI imaging data. The imaging data may be angiogram imaging data. For CT imaging data, rotational angiogram imaging data may be utilized. In the shown embodiment, the imaging machine 36 is a so-called C-arm machine having a detector 20 and an X-ray source 22 mounted to ends of a C-shaped arm 21. The C-arm 21 is able to be rotated so that the detector 20 and the source 22 are directed at different angles relative to the patient 33. This allows slices of tomographic imaging data to be obtained of a region of interest of the patient 33. More specifically, the imaging machine 36 is able to obtain a three dimensional volume, optionally by a rotational CT image acquisition scan, such as cone-beam CT scanning, using motorized C-arm 21 motion when the imaging machine is of the C-arm kind.
The endoscope 12 generally includes an elongate insertion tube 14 for insertion in the patient 33, and a video camera 16 for obtaining the endoscopic imaging data (which is generally video data). The endoscope 12 may also include one or more lenses to focus and direct images onto image capture elements (e.g. charge-coupled devices, CCDs) of the camera 16. The camera 16 may be located at the proximal end of the endoscope 12 or at the distal end. When at the proximal end, the endoscope may include an optical transmission medium (such as optical fiber) extending between distal and proximal ends of the elongate insertion tube 14. The elongate insertion tube 14 is generally rigid in the laparoscopic system shown in the embodiment of FIG. 1. In embodiments, the endoscope 12 includes an illuminator (not shown) for illumination of a surface part of a region of interest of the patient 33. The imaging system 32 may comprise a first bracket 17 for mounting the endoscope 12 to the patient support table 34 and a second bracket 15 for mounting the endoscope 12 to the patient 33. The brackets 15, 17 assist in securing the endoscope 12 relative to the patient 33.
The endoscope 12, using the camera 16 integrated therewith, is able to obtain optical image information for guiding minimally invasive surgical treatments. Optical images may be acquired using the camera 16 inside the abdomen using gas or lift laparoscopy. Here, the abdominal wall is elevated to provide space for generating the optical images and for maneuvering surgical tools (including the endoscope 12).
The camera 16 is able to obtain surface imaging from an end of the endoscope 12 that is positioned below the skin through an incision, often within an inflated abdominal cavity. Such surface imaging is able to guide an operative in investigative procedures and is able to guide placement of surgical tools in interventional procedures. The imaging data from the camera 16 is able to be displayed on the display unit 24 through use of the image processing system 10.
In exemplary embodiments, the camera 16 is a spectral imaging device. Such spectral imaging devices obtain imaging data that include a spectral and an intensity component. For example, each spatial pixel of imaging data includes spectrally differentiated data. Spectral cameras collect optical information as a set of images. Each image represents a wavelength range of the electromagnetic spectrum, also known as a spectral band. These images are combined to form a three-dimensional spectral data cube for processing and analysis by the image processing system 10, where x and y represent two spatial dimensions of the surface in the region of interest, and λ represents the spectral dimension comprising a range of wavelengths. That is, a three-dimensional data cube can be realized with the spatial information in two dimensions and spectral information arranged in the third dimension. Often this will mean that for each two dimensional position in the imaging data (usually a pixel), there will be a set of intensity data corresponding to each resolved wavelength range. The camera 16 and image processing system 10 may be able to spectrally split the incoming light into wavelength bins, with intensity measurements captured for each bin for each pixel, thereby providing spatial and spectral imaging data. This wavelength splitting can be performed using filters and/or chromatic dispersion components. The spectral imaging camera 16 may be multispectral or hyperspectral. Hyperspectral imaging can include one hundred or more images taken at different wavelength ranges. A multispectral imaging device may be more concentrated in the wavelength ranges over which it takes images. For example, wavelength ranges may be selected that correspond to peak absorption or reflection wavelengths of structures of interest such as blood vessels, bones, organ tissue, tumor tissue, fat tissue, etc.
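The (x, y, λ) data cube arrangement described above can be sketched as follows. This is a minimal illustration assuming a toy 64x64 cube with four hypothetical wavelength bins; all names and dimensions are illustrative and not part of the disclosure:

```python
import numpy as np

# Hypothetical dimensions: 64x64 spatial pixels, 4 spectral bins.
# Axis conventions vary; here the cube is arranged as (y, x, lambda).
HEIGHT, WIDTH, N_BANDS = 64, 64, 4

def make_data_cube(band_images):
    """Stack per-band 2D images into a (y, x, lambda) spectral data cube."""
    cube = np.stack(band_images, axis=-1)
    return cube

def pixel_spectrum(cube, y, x):
    """Return the set of intensity data (one value per wavelength bin)
    recorded at a single spatial pixel."""
    return cube[y, x, :]

# Toy band images: band i uniformly filled with intensity i.
bands = [np.full((HEIGHT, WIDTH), float(i)) for i in range(N_BANDS)]
cube = make_data_cube(bands)
spectrum = pixel_spectrum(cube, 10, 20)  # one intensity per wavelength bin
```

In a real multispectral setup the bins would instead be chosen around, for example, blood absorption peaks, as the passage suggests.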
The camera 16 may also be able to obtain images in the infrared domain, which may assist in identifying tissue structures from the endoscopic imaging data. The region of interest may have a thermal profile such that suitable infrared imaging would allow registration structures to be identified. The above described spectral imaging device (e.g. multispectral or hyperspectral) may operate in the infrared domain and optionally also the visible range so that spectrally differentiated infrared imaging is utilized. Examples of applicable infrared imaging would be infrared thermography (IRT), thermal imaging, and thermal video. The amount of radiation emitted by structures within the region of interest increases with temperature; therefore, thermography allows identification of variations in temperature in the region of interest, which can allow identification of structures of interest for use in the registration process.
In various exemplary embodiments, the endoscope 12 may deliver light through the illuminator at specific wavelengths and the camera 16 may operate with specific wavelength filters for enhancing visualization of tissue structures such as blood vessels.
Thus, various techniques can be employed to allow the camera 16 and the image processing system 10 to enhance visualization and identification of structures in the surface imaging data for use in the registration process. Generally, these techniques allow enhancement of visualization of sub-surface structures in the surface imaging data for registration with corresponding sub-surface structures in the tomographic imaging data. In operation, the endoscope 12 delivers a sequence of video images such as thermography images, multispectral images or hyperspectral images, which enable the detection of structures for use in the registration process such as blood vessels (e.g. arteries and/or veins) in a two dimensional image of a surface part of the region of interest.
The image processing system 10, such as a general purpose computer, is operably connected to the imaging machine 36 and processes the imaging data from the imaging machine 36. The processed imaging data may be presented on the display unit 24 of the imaging system 32.
The image processing system 10 comprises at least one processor 30. The processor 30 is operably connected to a memory 28. The processor 30 and the memory 28 may be connected through a bus 38. The processor 30 may be any device capable of executing program instructions, such as one or more microprocessors. The memory may be any volatile or non- volatile memory device, such as a removable disc, a hard drive, a CD, a Random Access Memory (RAM), a Read Only Memory (ROM), or the like. Moreover, the processor 30 may be embodied in a general purpose computer.
A display generation module 40 is also operably connected to the processor 30 through the bus 38. The display generation module 40 is configured to generate, with the processor 30, display of images for the display unit 24. The display generation module 40 may be implemented by hardware, software or a combination thereof. The display generation module 40 may be included as programming instructions for the processor 30 and stored on the memory 28.
The memory 28 has encoded thereon at least one computer program 44 or computer program element 44, providing instructions which are executable by the processor 30 to process images from the imaging machine 36 and from the endoscope 12. In addition to the computer program 44 for processing the imaging data for presentation on the display unit 24, a computer program 44 is also provided that performs a method of registering tomographic imaging of a region of interest of a subject and endoscopic imaging including a surface of the region of interest. The computer program 44 is also adapted to implement features of the image processing system 10, as described further herein.
The image processing system 10 may be co-located with the imaging machine 36 or remotely located, or the image processing system 10 may take on a distributed architecture.
The image processing system 10 includes a data interface unit 26 that is configured for receiving imaging data. The data interface unit 26 may receive tomographic imaging data from the imaging machine 36, the memory 28, over a wireless network or from some other source. The tomographic imaging data may be stored in the memory 28. Such data may be obtained pre-operatively by the same imaging machine 36 as used intra-operatively or by a different imaging machine 36.
In the exemplary embodiment of FIG. 1, the tracking system 35 comprises at least one video camera 37 associated with the imaging machine 36. In the present embodiment, the at least one video camera 37 is connected to the detector 20 of the imaging machine 36. In fact, plural video cameras 37 are included in the tracking system 35 that obtain video data of the endoscope 12 from different angles, thereby allowing determination of a position of the endoscope 12 in three dimensional space.
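Determining a three dimensional position from plural camera views, as described above, is conventionally done by triangulation. The sketch below uses standard linear (DLT) triangulation from two views; the projection matrices and marker coordinates are invented toy values, not parameters of the disclosed tracking system 35:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of a 3D point from two camera views.

    P1, P2: 3x4 projection matrices of the two tracking cameras.
    uv1, uv2: image coordinates of the tracked endoscope marker in each view.
    Builds the homogeneous system A X = 0 and takes its SVD null vector.
    """
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two hypothetical cameras: identity camera, and one translated 1 unit in x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([1.0, 2.0, 5.0])                        # true 3D position
uv1 = point[:2] / point[2]                               # view in camera 1
uv2 = (point + np.array([-1.0, 0.0, 0.0]))[:2] / point[2]  # view in camera 2
estimate = triangulate(P1, P2, uv1, uv2)
```

Real systems would first calibrate the cameras 37 against the imaging machine 36 geometry to obtain the projection matrices.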
The image processing system 10 is configured, by operation of the computer program 44, to register tomographic imaging of a region of interest of a subject and endoscopic imaging from the endoscopic imaging device 12 including a surface of the region of interest. In particular, the processor 30 is adapted to perform image processing and registration of the processed images as described in the flowchart of FIG. 2.
FIG. 2 shows a flow chart of steps performable by the computer program 44 disclosed herein, steps performed according to the computer implemented method disclosed herein and steps performed by the image processing system 10, through operation of the processor 30. FIG. 3 shows exemplary imaging data that serve to illustrate the registration processes described herein.
In step 100, three dimensional tomographic imaging data is segmented to identify sub-surface structures such as blood vessels, bones, tumors, soft tissue, etc. The segmentation process can be performed using known algorithmic segmentation techniques to produce a model of the sub-surface structures. In particular, the segmentation process is assisted where the tomographic imaging data is contrast enhanced since blood vessels are more clearly differentiable from surrounding tissue. For such data, a segmentation process with respect to blood vessels is preferred.
Referring to FIG. 3(A), an exemplary image 162 of segmented tomographic imaging data is shown in the yz plane as a result of the segmentation step 100. There are shown blood vessels 154, a surface of an organ 150, a tumor 152 and sub-surface structures 156, 158, 160 for use in registration. The segmentation process of step 100 particularly has been utilized to identify sub-surface blood vessels including parts 156, 158, 160 thereof adjacent a surface of the organ 150. Accordingly, step 100 implements a segmentation method to segment the three dimensional spatial relation of sub-surface structures (e.g. blood vessels such as arteries and veins) from the tomographic imaging data of the region of interest.
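A greatly simplified sketch of the kind of vessel segmentation performed in step 100 is given below. It assumes contrast-enhanced data in which vessel voxels are simply brighter than surrounding tissue, and reduces segmentation to a single global threshold; real pipelines would add vesselness filtering and connected-component cleanup, and the toy volume and intensities are hypothetical:

```python
import numpy as np

def segment_vessels(volume, threshold):
    """Crude intensity-threshold segmentation of contrast-enhanced vessels.

    In contrast-enhanced tomographic data, vessel voxels are brighter than
    surrounding tissue, so a global threshold yields a rough vessel mask.
    Returns the boolean mask and the voxel coordinates of the segmented
    sub-surface structures, usable later as registration landmarks.
    """
    mask = volume > threshold
    landmarks = np.argwhere(mask)  # (N, 3) array of z, y, x voxel indices
    return mask, landmarks

# Toy 8x8x8 volume with one bright "vessel" running along the first axis.
vol = np.zeros((8, 8, 8))
vol[:, 4, 4] = 200.0   # hypothetical contrast-enhanced vessel intensity
mask, landmarks = segment_vessels(vol, threshold=100.0)
```

The landmark coordinates extracted this way play the role of the sub-surface structures 156, 158, 160 in the later registration step.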
Turning back to FIG. 2, in step 102, structures, particularly sub-surface structures, in the endoscopic imaging data corresponding to the sub-surface structures in the tomographic imaging data are algorithmically identified. Step 102 may include segmentation of the endoscopic imaging data from the endoscopic device 12 to identify sub-surface structures of the same kind as the sub-surface structures identified in the segmentation of the tomographic imaging data. Thus, the segmentation may be with respect to blood vessels. It will be appreciated that the two dimensional optical imaging data from the endoscopic device 12 can only "see" so deep beyond the physical surface (perhaps only to a depth of between 1 mm and 5 mm beyond the physical surface boundary). As such, the common sub-surface structures will be near-surface structures identified in the endoscopic imaging data and the tomographic imaging data.
In order to enhance an ability to segment the endoscopic imaging data, i.e. the ability to accurately differentiate structures of interest from surrounding tissue, the endoscopic imaging data is spectral imaging data or thermographic imaging data. For spectral imaging, the imaging data captured by the camera 16 of the endoscope 12 provides images in two spatial dimensions (x and y in FIG. 3) and a spectral dimension. That is, each frame of imaging in video data obtained by the endoscopic camera 16 is made up of two dimensional images at plural (e.g. at least 2, 3, 5, 10, 100, etc.) different wavelength ranges or bins. For example, a spectral bin could be included corresponding to peak absorption of blood vessels and a spectral bin could be included for one or more other tissue types. By applying an image processing technique to contrast images in these two spectral bins (e.g. by a form of subtraction), endoscopic imaging data can be enhanced for blood vessel segmentation. For thermographic imaging, infrared imaging data is obtained by the camera that is sufficiently sensitive to allow sub-surface structures to be identified based on those sub-surface structures being at a different temperature (e.g. warmer) than a surface or other surrounding structure in the image. For example, it can be expected that relatively hot blood vessels could be differentiated in this way. Thus, a thresholding technique could be applied such that hottest regions of the endoscopic imaging data are highlighted to facilitate vessel segmentation. A combination of thermographic and spectral imaging could be applied. For example, the spectral imaging data may include two dimensional images taken in infrared wavelength ranges or bins (possibly in addition to spectral bins in the visible range). In FIG. 3(B), the surface of an organ 150 is imaged with the camera 16 of the endoscopic imaging device 12 and the two dimensional image 164 in the xy plane is shown. A segmentation method is applied according to step 102 and near-surface, sub-surface structures 156, 158, 160 are identified, which are parts of blood vessels in the illustrated embodiment. These structures 156, 158, 160 are able to be used for registration with the corresponding structures 156, 158, 160 identified in the segmented tomographic imaging data from step 100, as will be explained in the following.
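The band-subtraction enhancement described above can be sketched as follows. The band indices and intensity values are hypothetical toy numbers, and a real implementation would involve calibrated spectral unmixing rather than a single subtraction:

```python
import numpy as np

def enhance_vessels(cube, vessel_band, tissue_band):
    """Contrast two spectral bins by subtraction to enhance vessels.

    cube: (y, x, lambda) endoscopic spectral data cube.
    vessel_band: index of the bin near a blood absorption peak, where
                 vessels appear dark (low reflected intensity).
    tissue_band: index of a bin where vessels and tissue reflect similarly.
    The difference image is large where blood absorbs strongly.
    """
    return cube[:, :, tissue_band].astype(float) - cube[:, :, vessel_band]

# Toy cube: vessels absorb in band 0, everything reflects in band 1.
cube = np.ones((4, 4, 2)) * 100.0
cube[1, :, 0] = 10.0           # one row of "vessel" pixels, dark in band 0
enhanced = enhance_vessels(cube, vessel_band=0, tissue_band=1)
vessel_mask = enhanced > 50.0  # simple threshold on the contrast image
```

The same thresholding pattern applies to the thermographic case, with the contrast image replaced by the infrared intensity image.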
The registration process exemplified by the flow chart of FIG. 2 includes a step 104 of registering the segmented surface imaging from step 102 with the segmented tomographic imaging data from step 100 by aligning the identified corresponding sub-surface structures. Techniques for performing such registration, once suitable structures for alignment have been identified, are known in the art; for example, the Coherent Point Drift algorithm, other known non-rigid (elastic) registration techniques, or the method outlined according to FIGS. 4 and 5 below could be used. Such alignment can be iteratively performed and the registration technique may be a non-rigid one to ensure that deformation in the region of interest, as captured by the endoscopic imaging camera 16, is transformed to corresponding deformation in the registered tomographic imaging data. By making use of sub-surface structures in both modalities of imaging data, particularly blood vessels, as registration landmarks and by obtaining such structures from respective segmentation processes (particularly where the segmentation of the endoscopic imaging data is enhanced as described above), sufficient quantity and quality of common landmarks are available for an accurate elastic deformation technique.
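As a hedged illustration of landmark-based alignment, the sketch below computes only the closed-form translation component between corresponding landmark sets. It is not an implementation of Coherent Point Drift or of the full elastic method, which additionally estimate non-rigid displacement fields; it shows the simplest initial-alignment step on invented coordinates:

```python
import numpy as np

def align_translation(fixed, moving):
    """Least-squares translation aligning two corresponding landmark sets.

    fixed:  (N, 2) sub-surface landmarks from the endoscopic image.
    moving: (N, 2) matching landmarks projected from the tomographic model.
    The optimal translation simply matches the two centroids; elastic
    methods would refine this with a non-rigid deformation afterwards.
    """
    t = fixed.mean(axis=0) - moving.mean(axis=0)
    return moving + t, t

fixed = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
moving = fixed + np.array([5.0, -3.0])   # same shape, displaced
aligned, t = align_translation(fixed, moving)
```

With noisy or deformed landmarks the residual after this step is what a non-rigid technique would then minimize.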
The three dimensional tomographic data set may be registered to the endoscope 12 setup, for example prior or subsequent to abdominal wall lifting in laparoscopic procedures, so that a coordinate system of the tracking system 35 and a coordinate system of the tomographic imaging data can be registered. The registration step 104 is performed for elastic registration of the two dimensional endoscopic imaging data and the three dimensional tomographic imaging data, which registers spectral (e.g. hyperspectral) two dimensional endoscopic images to the three dimensional tomographic data. The registration may comprise estimating a relative position of the endoscope 12, particularly a distal viewing end of the endoscope camera 16, in relation to the tomographic imaging data. The initial position of the endoscope 12 relative to a tissue layer of the tomographic imaging data can be estimated using an external tracking system such as the tracking system 35, which uses video data from cameras 37 fixed to the imaging machine 36. Since the position of the endoscope 12 relative to the imaging machine 36 is known from the tracking system 35, the images of the endoscope and the tomographic images can be initially registered. Thereafter, sub-surface structures seen by the endoscope camera 16, such as blood vessels, can be used to register the two dimensional images when the same structures have also been identified in the tomographic images. In this way, superficial structures such as blood vessels are used for registration.
The tracking system 35 is thus used to locate the position of the endoscope 12 in three dimensional space and in the three dimensional coordinate system of the tomographic imaging data, since the coordinate system of the tracking system and that of the tomographic imaging data have been registered. This allows initial registration of the two dimensional imaging data in the three dimensional imaging data. The initial registration can be elastically refined to take into account deformations by registering sub-surface structures as described above.
Referring to FIG. 3(C), a registered and integrated image 166 as a result of the registration process can be seen. The segmented three dimensional images 162 have been morphed with respect to the surface two dimensional image 164 to align the identified sub-surface structures 156, 158, 160 in order to render the registered image 166. Such a process is iteratively performed to produce an integrated and registered video. Such images allow a surgeon to see the surface imaging and the three dimensional segmented model at the same time on the display unit 24 to accurately guide interventional procedures (and other procedures). In particular, it may allow a surgeon to avoid non-superficial blood vessels in making incisions that are too deep to visualize from just the endoscopic imaging data (even if enhanced as described above).
In FIGS. 3(D) and 3(E), images 168, 170 relevant to elastic registration are shown. In FIG. 3(D), a two dimensional image 168 from the endoscopic camera 16 can be seen in which deformation has occurred. Such deformation may be as a result of manipulation by a surgical tool, insufflation, breathing, etc. As can be seen, a surface of the organ 150 has been stretched in the y direction as compared to the endoscopic image 164 in FIG. 3(B). The elastic registration process iteratively transforms the three dimensional imaging data to the deformed two dimensional image 168 by aligning the corresponding sub-surface structures 156, 158, 160. Such deformation is tracked in the registered and integrated image 170 shown in FIG. 3(E). From FIG. 3(E), it can be seen that the tumor 152 and the blood vessels 154 shown in the three dimensional part of the image 170 have been deformed to match the deformation shown by the two dimensional image 168. Thus, the surgeon is guided to take into account the deformed position of the tumor 152 and blood vessels 154 when carrying out a procedure.
The segmentation step 100 results in soft tissue differentiation to identify tumor and tumor-like pathologies 152, as shown in FIG. 3, that cannot be observed by superficial surface imaging as performed by the endoscope 12 (e.g. a laparoscope 12).
Deformation of the target organ 150 (e.g. liver, kidney, esophagus, etc.) and soft tissue structure, which can happen during surgical instrument-organ interaction, is observed by the endoscope 12 as shown in FIG. 3(D). In order to register the tomographic images taken by the imaging machine 36 (which may be pre-operative images) to the endoscope images including the deformation, the tomographic images are deformed to match the sub-surface structures 156, 158, 160 in the two imaging modalities. In particular, superficial structures such as the blood vessels 156, 158, 160 can be used for elastic registration, as described above with respect to step 104. In this way, it is possible, without having to acquire new three dimensional imaging with the imaging machine 36, to predict the deformed organ 150 geometry from the endoscopic imaging. For instance, the tumor 152 may now be deformed and displaced in the registered images, as shown by comparing the image 170 of FIG. 3(E) with the image 166 of FIG. 3(C).
Turning back to the flowchart of FIG. 2, a step 106 of displaying registered and integrated tomographic and endoscopic imaging data is provided. Such a display is provided on the display unit 24 and generated by the display generation module 40. For example, a video overlay of the elastically deformed and registered tomographic images and the endoscopic surface images may be provided. The two dimensional images from the endoscopic camera 16 may be presented as partly see-through to allow an overlay of the three dimensional model obtained in step 100 to be viewed, to give an impression of depth and to be able to see deeper than allowed by the endoscopic imaging. In another example, a video of registered images may be displayed as shown in FIGS. 3(C) and 3(E). According to step 106, it is possible to display an integrated endoscopic image showing the superficial vessels from the optical, endoscopic images (either as a result of spectral imaging as described above and/or segmentation according to step 102) as well as the tissue behind the surface layer as provided by the tomographic imaging data to give an indication of upcoming structures when advancing the endoscope 12.
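The partly see-through overlay described for step 106 can be sketched as a simple alpha blend of a registered tomographic rendering with an endoscopic video frame. Frame sizes, intensities and the fixed alpha value below are toy assumptions; a real display module would blend per-structure and per-pixel:

```python
import numpy as np

def blend_overlay(endoscopic_rgb, tomographic_rgb, alpha=0.5):
    """Alpha-blend a registered tomographic rendering with the endoscopic
    frame, making the surface image partly see-through so that deeper
    structures from the tomographic model remain visible."""
    assert endoscopic_rgb.shape == tomographic_rgb.shape
    out = alpha * endoscopic_rgb.astype(float) \
        + (1.0 - alpha) * tomographic_rgb.astype(float)
    return np.clip(out, 0.0, 255.0).astype(np.uint8)

# Toy frames: uniform surface image and uniform model rendering.
surface = np.full((4, 4, 3), 200, dtype=np.uint8)
model = np.full((4, 4, 3), 100, dtype=np.uint8)
fused = blend_overlay(surface, model, alpha=0.5)
```

Each fused pixel is the midpoint of the two inputs at alpha 0.5, which is the "partly see-through" effect the passage describes.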
An exemplary registration technique to be used in step 104 is described at a general level with reference to the flow chart of FIG. 4 and the schematic data transformation illustrations shown in FIG. 5. In step 200, the tomographic imaging data is received, which may be preoperative imaging data. Such imaging data 250 is schematically illustrated in FIG. 5(a), which shows an organ 150.
In step 202, a position of the endoscope relative to the tomographic imaging data is estimated based on tracking data from the video tracking system 35. From this, a field of view within the tomographic imaging data of the endoscopic camera 16 can be estimated, including a depth along a z axis (which coincides with a longitudinal axis of the insertion tube 14 of the endoscope 12), an area of the field of view, an angle of an xy plane being viewed, etc. Such relative positioning of the endoscope 12 is schematically illustrated in the imaging data 252 represented in FIG. 5(b). Also shown in FIG. 5(b) is a possibly deformed organ 150', which can be viewed by the endoscopic camera 16, but which has yet to be transformed to the tomographic imaging data.
In step 204, a search space is initialized based on the estimated field of view. Thus, the search space may include a range of x, y and z coordinates to be searched in the tomographic imaging data and a coherent bending range to be searched, in order to provide constraints to the search space. The z direction is a depth direction, whereas x and y directions are perpendicular to each other in a plane perpendicular to the z direction.
In step 206, an image slab obtained from the endoscopic camera 16 is received. Referring to FIG. 5(c), a representation 254 is provided of the image slab 253 seated into the tomographic imaging data at a most likely position based on the estimated field of view from step 202.
In step 208, the image slab 253 is shifted along the x, y and z axes iteratively within the search space determined in step 204. At each iteration, a similarity measure is determined based on calculated alignment of the sub-surface structures. The iterations are performed with the aim of minimizing the similarity measure to determine an optimally registered image slab 253 in the tomographic imaging data along the x, y and z directions assuming a rigid image slab 253. Referring to FIG. 5(d), an image 256 of the rigid x, y and z shifting of the image slab 253 within the search space is represented.
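The iterative rigid shifting of step 208 can be sketched as an exhaustive search over integer translations that minimizes a dissimilarity score. The disclosure does not specify the similarity measure in detail, so the summed nearest-neighbor distance between landmark sets below is an assumption, as are the toy landmark coordinates:

```python
import numpy as np

def best_shift(volume_landmarks, slab_landmarks, search_range):
    """Exhaustive x/y/z shift search minimizing a landmark-distance score.

    Each candidate translation of the (rigid) image slab is scored by the
    summed distance between shifted slab landmarks and their nearest
    volume landmarks; the shift with the smallest score wins.
    """
    best, best_score = None, np.inf
    for dx in search_range:
        for dy in search_range:
            for dz in search_range:
                shifted = slab_landmarks + np.array([dx, dy, dz])
                # All pairwise distances, then nearest-neighbor sum.
                d = np.linalg.norm(
                    shifted[:, None, :] - volume_landmarks[None, :, :],
                    axis=2)
                score = d.min(axis=1).sum()
                if score < best_score:
                    best, best_score = (dx, dy, dz), score
    return best, best_score

vol_lm = np.array([[5.0, 5.0, 5.0], [6.0, 5.0, 5.0]])
slab_lm = vol_lm - np.array([1.0, 0.0, 2.0])   # slab displaced by (1, 0, 2)
shift, score = best_shift(vol_lm, slab_lm, range(-3, 4))
```

The bending search of step 210 would extend this loop with non-rigid candidate transformations under the coherence constraints described in the text.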
In step 210, the image slab 253 is no longer treated rigidly and is iteratively bent within the determined bend range of the search space from step 204. The bending is performed with a view to maintaining a coherent image slab, without discontinuities and within reasonable limits for a surface of body tissue e.g. an organ. Such limits can be predetermined from experimentation. The similarity measure is determined at each iteration and the search continues within the search space to minimize the similarity measure. Referring to FIG. 5(e), a representation 258 of bending of the image slab 253 within the tomographic imaging data is shown.
In step 212, the similarity measure has been minimized in steps 208 and 210 to obtain an elastic registration of the two dimensional endoscopic imaging data and the three dimensional tomographic imaging data. Such an elastic registration provides a registration transformation between the two imaging modalities so that a display can be generated of the two registered and integrated imaging modalities. Such registered imaging data is shown in the image 260 of FIG. 5(f).
In another exemplary embodiment of the present invention, a computer program or a computer program element is provided that is characterized by being adapted to execute the method steps of the method according to one of the preceding embodiments, on an appropriate processing system.
The computer program element might therefore be stored on a computing unit, which might also be part of an embodiment of the present invention. This computing unit may be adapted to perform or induce a performing of the steps of the method described above. Moreover, it may be adapted to operate the components of the above described apparatus. The computing unit can be adapted to operate automatically and/or to execute the orders of a user. A computer program may be loaded into a working memory of a data processor. The data processor may thus be equipped to carry out the method of the invention.
This exemplary embodiment of the invention covers both a computer program that uses the invention right from the beginning and a computer program that, by means of an update, turns an existing program into a program that uses the invention.
Furthermore, the computer program element might be able to provide all necessary steps to fulfil the procedure of an exemplary embodiment of the method as described above.
According to a further exemplary embodiment of the present invention, a computer readable medium, such as a CD-ROM, is presented, wherein the computer readable medium has a computer program element stored on it, which computer program element is described in the preceding section.
A computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems. However, the computer program may also be presented over a network like the World Wide Web and can be downloaded into the working memory of a data processor from such a network. According to a further exemplary embodiment of the present invention, a medium for making a computer program element available for downloading is provided, which computer program element is arranged to perform a method according to one of the previously described embodiments of the invention.
It has to be noted that embodiments of the invention are described with reference to different subject matters. In particular, some embodiments are described with reference to method type claims, whereas other embodiments are described with reference to device type claims. However, a person skilled in the art will gather from the above and the following description that, unless otherwise indicated, in addition to any combination of features belonging to one type of subject matter, any combination between features relating to different subject matters is also considered to be disclosed by this application.
Moreover, all features can be combined, providing synergistic effects that are more than the simple summation of the features.
While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the disclosure in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the exemplary embodiment or exemplary embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the disclosure as set forth in the appended claims and the legal equivalents thereof.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.

Claims

CLAIMS:
1. An image processing system for registering tomographic imaging of a region of interest of a subject and endoscopic imaging including a surface of the region of interest, the system comprising:
a data interface unit adapted to receive tomographic imaging data and endoscopic imaging data;
at least one processor adapted to:
perform a segmentation processing technique on the tomographic imaging data to identify sub-surface structures;
identify sub-surface structures in the endoscopic imaging data corresponding to the sub-surface structures in the tomographic imaging data, wherein the at least one processor is adapted to perform a segmentation processing technique on the endoscopic imaging data to identify the sub-surface structures in the endoscopic imaging data;
register the tomographic imaging data and the endoscopic imaging data based on registration of the sub-surface structures identified in the endoscopic imaging data and the sub-surface structures identified from segmentation of the tomographic imaging data, and wherein the at least one processor is adapted to register the tomographic imaging data and the endoscopic imaging data using an elastic registration process.
2. The image processing system of claim 1, wherein the at least one processor is configured to perform the elastic registration process by iteratively shifting the endoscopic imaging data along the x, y and z axes with respect to the tomographic imaging data to minimize a measure of registration error determined based on registration of the sub-surface structures in the tomographic imaging data and the corresponding structures in the endoscopic imaging data.
3. The image processing system of claim 1 or 2, wherein the at least one processor is configured to register the tomographic imaging data and the endoscopic imaging data by rigidly shifting the endoscopic imaging data with respect to the tomographic imaging data to minimize registration error between the sub-surface structures identified in the endoscopic imaging data and the sub-surface structures identified in the tomographic imaging data and to thereafter bend the endoscopic imaging data with respect to the tomographic imaging data to minimize registration error between the sub-surface structures identified in the endoscopic imaging data and the sub-surface structures identified in the tomographic imaging data.
4. The image processing system of claim 1, 2 or 3, wherein the endoscopic imaging data is obtained from a spectral, multispectral, hyperspectral or thermographic imaging device.
5. The image processing system of any preceding claim, wherein the tomographic imaging data is contrast-enhanced.
6. The image processing system of any preceding claim, wherein the at least one processor is adapted to identify blood vessels as the sub-surface structures of the tomographic imaging data and the structures of the endoscopic imaging data.
7. The image processing system of claim 5, wherein the at least one processor is adapted to initialize the elastic registration process based on tracking data from a tracking system for tracking a position of the endoscope.
8. The image processing system of any preceding claim, comprising a display generation module configured to generate an integrated and registered display of endoscopic images based on the endoscopic imaging data and images based on the tomographic imaging data.
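As a hedged sketch only, the integrated and registered display of claim 8 can be thought of as alpha-blending a registered endoscopic frame onto a tomographic slice. The arrays, shapes and blend weight below are hypothetical, and assume both images share the same grid and intensity scale after registration.

```python
import numpy as np

def blend_overlay(tomo_slice, endo_image, alpha=0.4):
    """Alpha-blend a registered endoscopic frame onto a tomographic slice,
    producing a single integrated display frame."""
    return (1.0 - alpha) * tomo_slice + alpha * endo_image

tomo = np.full((4, 4), 200.0)   # hypothetical tomographic slice
endo = np.full((4, 4), 100.0)   # hypothetical registered endoscopic frame
frame = blend_overlay(tomo, endo)  # each pixel: 0.6*200 + 0.4*100 = 160
```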
9. An imaging system, comprising:
the image processing system of any preceding claim;
and at least one of:
a tomographic imaging machine for obtaining the tomographic imaging data; and
an endoscope for obtaining the endoscopic imaging data.
10. The imaging system of claim 9, comprising a tracking system for tracking a position of the endoscope.
11. The imaging system of claim 10, wherein the tracking system comprises at least one video camera associated with the tomographic imaging machine.
12. A computer implemented method for registering tomographic imaging of a region of interest of a subject and endoscopic imaging including a surface of the region of interest, the method comprising:
- receiving tomographic imaging data and endoscopic imaging data;
- performing a segmentation processing technique on the tomographic imaging data to identify sub-surface structures;
- performing a segmentation processing technique on the endoscopic imaging data to identify sub-surface structures in the endoscopic imaging data corresponding to the sub-surface structures in the tomographic imaging data; and
- registering the tomographic imaging data and the endoscopic imaging data, based on registration of the sub-surface structures identified in the endoscopic imaging data and the sub-surface structures identified from segmentation of the tomographic imaging data, using an elastic registration process.
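For illustration only, the segmentation step of the claimed method could be prototyped as simple intensity thresholding of a contrast-enhanced volume. The toy volume, threshold value and function name below are hypothetical; a real system would use dedicated vessel-segmentation algorithms rather than a fixed threshold.

```python
import numpy as np

def segment_vessels(volume, threshold):
    """Naive segmentation: treat voxels brighter than a fixed threshold as
    contrast-enhanced vessel, returning a boolean mask."""
    return volume > threshold

vol = np.zeros((3, 3, 3))
vol[1, 1, :] = 100.0               # simulated contrast-enhanced vessel run
mask = segment_vessels(vol, threshold=50.0)
vessel_voxels = np.argwhere(mask)  # voxel coordinates of the segmented structure
```

The resulting voxel coordinates play the role of the sub-surface structures that the registration step aligns against their counterparts in the endoscopic imaging data.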
13. The computer implemented method of claim 12, comprising generating an integrated and registered display of endoscopic images based on the endoscopic imaging data and images based on the tomographic imaging data.
14. A computer program element adapted to implement an image processing system according to any one of the claims 1 to 11 or adapted to perform the method steps of claim 12 or 13 when executed by at least one processor.
15. A computer readable medium having stored the computer program element of claim 14.
PCT/EP2017/066360 2016-06-30 2017-06-30 Registering tomographic imaging and endoscopic imaging WO2018002347A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP16177312.2 2016-06-30
EP16177312 2016-06-30

Publications (1)

Publication Number Publication Date
WO2018002347A1 true WO2018002347A1 (en) 2018-01-04

Family

ID=56296698

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/066360 WO2018002347A1 (en) 2016-06-30 2017-06-30 Registering tomographic imaging and endoscopic imaging

Country Status (1)

Country Link
WO (1) WO2018002347A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013093761A2 (en) * 2011-12-21 2013-06-27 Koninklijke Philips Electronics N.V. Overlay and motion compensation of structures from volumetric modalities onto video of an uncalibrated endoscope
US20140241600A1 (en) 2013-02-25 2014-08-28 Siemens Aktiengesellschaft Combined surface reconstruction and registration for laparoscopic surgery
WO2015135058A1 (en) * 2014-03-14 2015-09-17 Synaptive Medical (Barbados) Inc. Methods and systems for intraoperatively confirming location of tissue structures

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
EDUARD SERRADELL ET AL: "Robust non-rigid registration of 2D and 3D graphs", COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2012 IEEE CONFERENCE ON, IEEE, 16 June 2012 (2012-06-16), pages 996 - 1003, XP032232175, ISBN: 978-1-4673-1226-4, DOI: 10.1109/CVPR.2012.6247776 *
GROHER M ET AL: "Deformable 2D-3D Registration of Vascular Structures in a One View Scenario", IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 28, no. 6, 1 June 2009 (2009-06-01), pages 847 - 860, XP011249669, ISSN: 0278-0062 *
JIANG JUE ET AL: "Marker-less tracking of brain surface deformations by non-rigid registration integrating surface and vessel/sulci features", INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, SPRINGER, DE, vol. 11, no. 9, 5 March 2016 (2016-03-05), pages 1687 - 1701, XP036053081, ISSN: 1861-6410, [retrieved on 20160305], DOI: 10.1007/S11548-016-1358-7 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190247142A1 (en) * 2018-02-15 2019-08-15 Leica Instruments (Singapore) Pte. Ltd. Image processing method and apparatus using elastic mapping of vascular plexus structures
CN110164528A (en) * 2018-02-15 2019-08-23 徕卡仪器(新加坡)有限公司 Utilize the image processing method and device of the elasticity mapping of blood vessel plex structure
CN110164528B (en) * 2018-02-15 2023-12-29 徕卡仪器(新加坡)有限公司 Image processing method and device using elastic mapping of vascular plexus structure
US20190282099A1 (en) * 2018-03-16 2019-09-19 Leica Instruments (Singapore) Pte. Ltd. Augmented reality surgical microscope and microscopy method
US11800980B2 (en) * 2018-03-16 2023-10-31 Leica Instruments (Singapore) Pte. Ltd. Augmented reality surgical microscope and microscopy method

Similar Documents

Publication Publication Date Title
US20200305985A1 (en) Apparatus and methods for use with skeletal procedures
US11883118B2 (en) Using augmented reality in surgical navigation
US11730562B2 (en) Systems and methods for imaging a patient
US9925017B2 (en) Medical navigation image output comprising virtual primary images and actual secondary images
CN107072736B (en) Computed tomography enhanced fluoroscopy systems, devices, and methods of use thereof
CN109219384B (en) Image-based fusion of endoscopic images and ultrasound images
JP6972163B2 (en) Virtual shadows that enhance depth perception
CA3029348C (en) Intraoperative medical imaging method and system
US20150031990A1 (en) Photoacoustic tracking and registration in interventional ultrasound
US20110105895A1 (en) Guided surgery
KR20130015146A (en) Method and apparatus for processing medical image, robotic surgery system using image guidance
EP3690810B1 (en) Method for displaying tumor location within endoscopic images
US20210052240A1 (en) Systems and methods of fluoro-ct imaging for initial registration
JP6745998B2 (en) System that provides images to guide surgery
Nagelhus Hernes et al. Computer‐assisted 3D ultrasound‐guided neurosurgery: technological contributions, including multimodal registration and advanced display, demonstrating future perspectives
WO2018002347A1 (en) Registering tomographic imaging and endoscopic imaging
Fusaglia et al. A clinically applicable laser-based image-guided system for laparoscopic liver procedures
JP6215963B2 (en) Navigation using pre-acquired images
JP2011024913A (en) Medical image processor, medical image processing program, and x-ray ct apparatus
JP2023064078A (en) System and method for image registration
WO2018109227A1 (en) System providing images guiding surgery
Shahin et al. Ultrasound-based tumor movement compensation during navigated laparoscopic liver interventions
JP2022541887A (en) Instrument navigation in endoscopic surgery during obscured vision
EP3788981B1 (en) Systems for providing surgical guidance
Allain et al. Biopsy site re-localisation based on the computation of epipolar lines from two previous endoscopic images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17739510

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17739510

Country of ref document: EP

Kind code of ref document: A1