WO2020174284A1 - Methods and systems for movement compensation during three-dimensional reconstruction of anatomical structures - Google Patents

Methods and systems for movement compensation during three-dimensional reconstruction of anatomical structures

Info

Publication number
WO2020174284A1
WO2020174284A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
movement
intraoperative
intraoperative images
area
Prior art date
Application number
PCT/IB2020/000173
Other languages
English (en)
Inventor
Tal Tzeisler
Yoel CHAIUTIN
Eran HARPAZ
Dima SEZGANOV
Dorian Averbuch
Original Assignee
Body Vision Medical Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Body Vision Medical Ltd.
Publication of WO2020174284A1

Classifications

    • A61B 6/032: Transmission computed tomography [CT]
    • A61B 6/485: Diagnostic techniques involving fluorescence X-ray imaging
    • A61B 6/5264: Devices using data or image processing involving detection or reduction of artifacts or noise due to motion
    • G06T 7/20: Analysis of motion
    • G06T 7/248: Analysis of motion using feature-based methods involving reference images or patches
    • G06T 7/32: Determination of transform parameters for the alignment of images (image registration) using correlation-based methods
    • G06T 7/337: Determination of transform parameters for the alignment of images (image registration) using feature-based methods involving reference images or patches
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G16H 30/20: ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40: ICT specially adapted for processing medical images, e.g. editing
    • G16H 50/50: ICT specially adapted for medical diagnosis, simulation or data mining, for simulation or modelling of medical disorders
    • G06T 2200/04: Indexing scheme involving 3D image data
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/10121: Fluoroscopy
    • G06T 2207/10124: Digitally reconstructed radiograph [DRR]
    • G06T 2207/30008: Bone
    • G06T 2207/30061: Lung
    • G06T 2207/30244: Camera pose

Definitions

  • the exemplary embodiments relate to methods and systems for generating three-dimensional models of anatomical structures based on two-dimensional images, and, more particularly, to compensating for movement of the anatomical structures during the imaging process.
  • a method includes obtaining a pre-operative fluoroscopic image of a region of interest; extracting at least one anatomical structure from the pre-operative fluoroscopic image; recording a plurality of intraoperative fluoroscopic images of the region of interest, wherein the region of interest is moving during the recording step; extracting the at least one anatomical structure from each of the plurality of intraoperative fluoroscopic images; estimating a pose of the intraoperative imaging device for each of the intraoperative fluoroscopic images; correcting each of the intraoperative fluoroscopic images for movement of the region of interest; and reconstructing a three-dimensional space based on the plurality of intraoperative fluoroscopic images.
  • a method includes obtaining at least two intraoperative images of an area of interest in a body of a patient while the area of interest is moving, wherein at least a first one of the at least two intraoperative images shows the area of interest in a first position, and wherein at least a second one of the at least two intraoperative images shows the area of interest in a second position that is different from the first position; estimating a three-dimensional pose of each of the at least two intraoperative images; performing a correction to compensate for movement of the area of interest between the first position and the second position; and reconstructing a three-dimensional volume of the area of interest based on the correction.
  • the step of performing the correction to compensate for movement of the area of interest includes estimating a three-dimensional pose of at least one marker on a medical instrument shown in the first one of the at least two intraoperative images and in the second one of the at least two intraoperative images; and correcting a camera pose of each of the at least two intraoperative images to compensate for the movement of the area of interest; wherein the reconstruction of the three-dimensional volume is performed based on the corrected camera poses.
  • the step of estimating the three-dimensional pose of the at least one marker includes comparing a projected shape of the medical instrument on one of the at least two fluoroscopic images to a known anatomical structure of a body cavity in which the medical instrument is positioned; and determining an assumption of the constrained location of the medical instrument within the anatomical structure.
  • the step of performing the correction to compensate for movement of the area of interest includes estimating two-dimensional tissue movement between the first one of the at least two intraoperative images and the second one of the at least two intraoperative images; and compensating for the two-dimensional tissue movement by applying warping to the first one of the at least two intraoperative images and the second one of the at least two intraoperative images based on tissue movement vectors corresponding to the two-dimensional tissue movement; wherein the reconstruction of the three-dimensional volume is performed based on the warped images.
  • the two-dimensional tissue movement is estimated using an optical flow method.
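For illustration, a minimal sketch of dense optical-flow estimation of per-pixel 2D tissue movement between adjacent fluoroscopic frames, using OpenCV's Farneback implementation; the function name, parameter values, and synthetic frames are assumptions for illustration, not part of the disclosure:

```python
import cv2
import numpy as np

def tissue_motion(frame_a, frame_b):
    """Dense 2D motion field between adjacent fluoroscopic frames.

    frame_a, frame_b: single-channel uint8 images of the same size.
    Returns an (H, W, 2) array of per-pixel displacement vectors (dx, dy).
    """
    return cv2.calcOpticalFlowFarneback(
        frame_a, frame_b, None,
        pyr_scale=0.5, levels=3, winsize=31,   # coarse-to-fine pyramid
        iterations=3, poly_n=7, poly_sigma=1.5, flags=0)

# Synthetic stand-in frames; real input would be consecutive fluoro frames.
a = np.zeros((256, 256), np.uint8)
a[100:120, 100:120] = 255
b = np.roll(a, shift=4, axis=1)                # object moved 4 px to the right
flow = tissue_motion(a, b)
print(flow[110, 110])                          # approximately [4, 0]
```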
  • the step of performing the correction to compensate for movement of the area of interest includes estimating two-dimensional tissue movement between the first one of the at least two intraoperative images and the second one of the at least two intraoperative images; calculating an average tissue movement vector between the first one of the at least two intraoperative images and the second one of the at least two intraoperative images; and correcting a camera pose for each of the at least two intraoperative images to compensate for the average tissue movement; wherein the reconstruction of the three-dimensional volume is performed based on the corrected camera poses.
  • the two-dimensional tissue movement is estimated using an optical flow method.
  • the step of performing the correction to compensate for movement of the area of interest includes estimating two-dimensional tissue movement between the first one of the at least two intraoperative images and the second one of the at least two intraoperative images; calculating an average tissue movement vector between the first one of the at least two intraoperative images and the second one of the at least two intraoperative images; correcting a camera pose for each of the at least two intraoperative images to compensate for the average tissue movement; and compensating for the two-dimensional tissue movement by applying warping to the first one of the at least two intraoperative images and the second one of the at least two intraoperative images based on tissue movement vectors corresponding to the two-dimensional tissue movement.
  • the reconstruction of the three-dimensional volume is performed based on the compensation for the two-dimensional tissue movement.
  • the step of performing the correction to compensate for movement of the area of interest includes estimating a pose of each of the at least two intraoperative images; identifying at least one landmark on each of at least two intraoperative images; grouping the at least two intraoperative images into at least two groups, wherein each of the at least two groups corresponds to a different movement phase; reconstructing at least two volumes of the area of interest, each of the at least two volumes of the area of interest corresponding to a corresponding one of the at least two groups; and merging the at least two volumes of the area of interest.
  • the at least two intraoperative images are grouped into the at least two groups based on closeness of the landmarks to epipolar lines.
  • the at least two intraoperative images are grouped into the at least two groups using a k-means algorithm.
  • the step of performing the correction to compensate for movement of the area of interest includes obtaining a preoperative image of the area of interest; estimating a pose for each of the at least two intraoperative images; generating at least two digitally reconstructed radiograph images, wherein each of the at least two digitally reconstructed radiograph images corresponds to a corresponding one of the at least two intraoperative images, wherein each of the digitally reconstructed radiograph images is generated based on (a) the preoperative image and (b) the pose of the corresponding one of the at least two intraoperative images; calculating at least two transforms, wherein each of the at least two transforms corresponds to a corresponding one of the at least two intraoperative images and to the one of the at least two digitally reconstructed radiograph images that corresponds to the corresponding one of the at least two intraoperative images; and compensating for movement in each of the intraoperative images based on the corresponding one of the at least two transforms.
  • the step of performing the correction to compensate for movement of the area of interest includes estimating a pose for each of the at least two intraoperative images; and generating at least one virtual image for a pose that is not a pose of one of the at least two intraoperative images; wherein the reconstruction of the three-dimensional volume is performed based on the at least two intraoperative images and the at least one virtual image.
  • the virtual image is generated using a machine learning algorithm.
  • the machine learning algorithm includes a generative adversarial network.
  • the movement of the area of interest is due to respiration of the patient.
  • the method also includes registering the three-dimensional volume to a preoperative volume.
  • Figure 1 shows a first exemplary method for registration of a preoperative volume to a reconstructed intraoperative volume.
  • Figure 2 shows a second exemplary method for registration of a preoperative volume to a reconstructed intraoperative volume.
  • Figure 3 shows a third exemplary method for registration of a preoperative volume to a reconstructed intraoperative volume.
  • Figure 4 shows a fourth exemplary method for registration of a preoperative volume to a reconstructed intraoperative volume.
  • Figure 5 shows a fifth exemplary method for registration of a preoperative volume to a reconstructed intraoperative volume.
  • Figure 6 shows a sixth exemplary method for registration of a preoperative volume to a reconstructed intraoperative volume.
  • Figure 7 shows an exemplary method for estimating local movement by using radio-opaque instrument markers.
  • Figure 8 shows representative tissue movement between adjacent frames.
  • Figure 9A shows a first exemplary method for tracking movement of tissue and anatomical landmarks.
  • Figure 9B shows a second exemplary method for tracking movement of tissue and anatomical landmarks.
  • Figure 10 shows images of lungs during representative movement phases.
  • Figure 11 shows an exemplary method for grouping images having close geometric configuration to one another.
  • Figure 12 shows an exemplary method for correlation of intraoperative images to preoperative imaging.
  • Figure 13 shows an exemplary method for completing missing information due to limited imaging angles.
  • a machine-readable medium may include any medium and/or mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device).
  • a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others.
  • a non-transitory article, such as a non-transitory computer readable medium, may be used with any of the examples mentioned above or other examples except that it does not include a transitory signal per se. It does include those elements other than a signal per se that may hold data temporarily in a “transitory” fashion, such as RAM and so forth.
  • the terms “computer engine” and “engine” identify at least one software component and/or a combination of at least one software component and at least one hardware component which are designed/programmed/configured to manage/control other software and/or hardware components (such as the libraries, software development kits (SDKs), objects, etc.).
  • Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth.
  • the one or more processors may be implemented as a Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processors; x86 instruction set compatible processors, multi core, or any other microprocessor or central processing unit (CPU).
  • the one or more processors may be dual-core processor(s), dual-core mobile processor(s), and so forth.
  • Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints.
  • One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
  • Such representations, known as “IP cores”, may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that make the logic or processor.
  • the tomography reconstruction from intraoperative images can be used for calculating the target (region of interest, including a moving target) position relative to the reference coordinate system.
  • a reference coordinate system can be defined by a jig with radiopaque markers with known geometry (e.g., an object with a fixed and known pattern), allowing calculation of a relative pose of each intraoperative image. Since each input image of the tomographic reconstruction has a known geometric relationship to the reference coordinate system, the position of the target can also be located in the reference coordinate system. This allows the target to be projected onto any intraoperative image.
  • the projected target position can be compensated for respiratory movement by tracking tissue in the region of the moving target.
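For illustration, a minimal sketch of projecting a target from the jig-defined reference coordinate system onto an intraoperative frame, assuming a pinhole camera model; all names and values (project_target, K, the example pose) are hypothetical:

```python
import numpy as np

def project_target(target_xyz, pose_world_to_cam, K):
    """Project a 3D target (reference/jig coordinates) onto a 2D image.

    target_xyz: (3,) target position in the reference coordinate system.
    pose_world_to_cam: (4, 4) rigid transform from reference to camera frame,
        as recovered from the radiopaque jig pattern for this frame.
    K: (3, 3) intrinsic matrix of the fluoroscopic imaging chain.
    """
    p = np.append(target_xyz, 1.0)              # homogeneous coordinates
    cam = pose_world_to_cam @ p                 # target in camera frame
    uvw = K @ cam[:3]                           # pinhole projection
    return uvw[:2] / uvw[2]                     # pixel coordinates (u, v)

# Hypothetical example values; in practice the pose comes from jig detection.
K = np.array([[1200.0, 0.0, 256.0],
              [0.0, 1200.0, 256.0],
              [0.0, 0.0, 1.0]])
pose = np.eye(4)
pose[2, 3] = 900.0                              # target ~900 mm from the source
print(project_target(np.array([10.0, -5.0, 0.0]), pose, K))
```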
  • at least some aspects of the reconstruction are performed in accordance with the exemplary methods described in International Patent
  • the tomography reconstructed volume can be registered to the preoperative CT volume. Given the known position of the center of the target, or of anatomical structures adjacent to the target, such as blood vessels or bronchial airways, in the reconstructed volume and in the preoperative volume, both volumes can be initially aligned. In other embodiments, ribs extracted from both volumes can be used to find the initial alignment (coarse registration). To find the correct rotation between the volumes, the reconstructed position and trajectory of the instrument can be matched to all possible airway trajectories extracted from the CT. The best match defines the optimal relative rotation between the volumes.
  • Figure 1 shows a first exemplary method for registration of a preoperative volume to a reconstructed intraoperative volume.
  • a preoperative image is received.
  • a target is marked on the preoperative image.
  • a 3D map of airways shown in the preoperative image is extracted.
  • at least one intraoperative reconstructed image is received.
  • a target is marked on at least one of the at least one intraoperative reconstructed image.
  • a 3D instrument trajectory is extracted from at least one of the at least one intraoperative reconstructed image.
  • the volumes of the preoperative image and the at least one intraoperative reconstructed image are initially aligned based on the respective targets.
  • in step 180, a best transformation is identified such that the targets match and such that the extracted 3D instrument trajectory matches the extracted 3D map of the airways.
  • matching is performed as described in paragraph [0053] of International Patent Application Publication No. WO/2020/035730, the contents of which are incorporated herein by reference in their entirety, and which describes a process including (1) extracting all possible 3D trajectories from the 3D map of airways connecting a trachea and every leaf segment of a bronchial tree; and (2) matching every extracted trajectory to the extracted 3D instrument trajectory.
  • Figure 2 shows a second exemplary method for registration of a preoperative volume to a reconstructed intraoperative volume. In step 210, a preoperative image is received.
  • a target is marked on the preoperative image.
  • a 3D map of airways shown in the preoperative image is extracted.
  • at least one intraoperative image is received.
  • at least one intraoperative reconstructed image is received.
  • a 2D instrument trajectory is extracted from at least one of the at least one intraoperative image.
  • a target is marked on at least one of the at least one intraoperative reconstructed image.
  • in step 280, the volumes of the preoperative image and the at least one intraoperative reconstructed image are initially aligned based on the respective targets. In step 290, a best transformation is identified such that the targets match and such that the extracted 2D instrument trajectory matches the extracted 3D map of the airways as projected into the image plane of the 2D image.
  • the resulting transformation transforms a preoperative image coordinate system to the intraoperative reconstructed image coordinate system.
  • the position of every intraoperative fluoroscopic image is known relative to the reconstructed image coordinate system. Therefore, in some embodiments, any 3D object from the preoperative image can be transformed to the reconstructed image coordinate system and then projected onto any fluoroscopic frame.
  • a projected 3D object from the preoperative image can be compared to a matching 2D object extracted from the fluoroscopic image, allowing a quality (e.g., cost) of the fit to be calculated.
  • the computed cost can be minimized using optimization methods such as gradient descent in order to find the best transformation.
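A minimal sketch of such a cost minimization, assuming a translation-only transform and SciPy's quasi-Newton optimizer standing in for plain gradient descent; a full implementation would optimize all six rigid-pose parameters. All names and data here are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

K = np.array([[1200., 0., 256.], [0., 1200., 256.], [0., 0., 1.]])

def project(points_cam):
    """Pinhole projection of (N, 3) camera-frame points to (N, 2) pixels."""
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def fit_cost(t, pts_preop, pts_2d):
    """Cost of a candidate translation t mapping preoperative 3D points
    into the reconstructed-image coordinate system before projection."""
    return np.sum((project(pts_preop + t) - pts_2d) ** 2)

# Hypothetical paired data: 3D airway points and their 2D extraction.
rng = np.random.default_rng(0)
pts = rng.uniform([-40, -40, 800], [40, 40, 900], size=(20, 3))
true_t = np.array([5.0, -3.0, 10.0])
obs = project(pts + true_t)

res = minimize(fit_cost, x0=np.zeros(3), args=(pts, obs), method="BFGS")
print(res.x)   # recovers approximately true_t
```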
  • Figure 3 shows a third exemplary method for registration of a preoperative volume to a reconstructed intraoperative volume.
  • a preoperative image is received.
  • a target is marked on the preoperative image.
  • a 3D map of ribs shown in the preoperative image is extracted.
  • at least one intraoperative image is received.
  • at least one intraoperative reconstructed image is received.
  • a 2D map of the ribs is extracted from at least one of the at least one intraoperative image.
  • a target is marked on at least one of the at least one intraoperative reconstructed image.
  • in step 380, the volumes of the preoperative image and the at least one intraoperative reconstructed image are initially aligned based on the respective targets.
  • in step 390, a best transformation is identified such that the targets match and such that the extracted 2D map of the ribs best matches the extracted 3D map of the ribs as projected into the image plane of the 2D image.
  • a best transformation is identified in a manner such as that described above with reference to step 290 of the method shown in Figure 2.
  • Figure 4 shows a fourth exemplary method for registration of a preoperative volume to a reconstructed intraoperative volume.
  • a preoperative image is received.
  • a target is marked on the preoperative image.
  • a 3D map of ribs shown in the preoperative image is extracted.
  • at least one intraoperative reconstructed image is received.
  • a target is marked on at least one of the at least one intraoperative reconstructed image.
  • a 3D map of ribs is extracted from at least one of the at least one intraoperative reconstructed image.
  • a best transformation is identified such that the targets match and such that the extracted 3D maps of the ribs match one another. In some embodiments, a best transformation is defined as a transformation that transforms the targets and the 3D objects (e.g., ribs) from one image's coordinate system into another image's coordinate system such that the Euclidean distance between the targets and between the 3D objects is minimized.
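For illustration, a minimal sketch of one standard way to find such a transformation: the Kabsch/Umeyama closed-form solution, which minimizes the Euclidean distance between paired 3D points. The pairing of targets and rib points is assumed given; names and test values are hypothetical:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) minimizing the Euclidean
    distance between paired 3D points (Kabsch/Umeyama without scaling).

    src, dst: (N, 3) paired points, e.g., targets plus rib points extracted
    from the preoperative and the reconstructed intraoperative volumes.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(1)
src = rng.normal(size=(30, 3))
theta = np.deg2rad(12.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([4.0, -2.0, 1.0])
R, t = best_rigid_transform(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))
```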
  • Figure 5 shows a fifth exemplary method for registration of a preoperative volume to a reconstructed intraoperative volume.
  • a preoperative image is received.
  • 3D landmarks are extracted from the preoperative image.
  • at least one intraoperative reconstructed image is received.
  • 3D landmarks are extracted from at least one of the at least one intraoperative reconstructed image.
  • a best transformation is identified such that the 3D landmarks match one another.
  • a best transformation is defined as a transformation that transforms the 3D objects (e.g., landmarks) from one image’s coordinate system into another image’s coordinate system such that the Euclidean distance between the 3D objects is minimized.
  • Figure 6 shows a sixth exemplary method for registration of a preoperative volume to a reconstructed intraoperative volume.
  • a preoperative image is received.
  • at least one intraoperative reconstructed image is received.
  • a best transformation is identified that best aligns both volumes using a defined cost function.
  • a cost function is based on mutual information correlation.
  • a cost function is based on 3D image correlation.
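A minimal sketch of a mutual-information cost between two volumes already resampled onto a common grid; the histogram-based estimator, bin count, and toy data are illustrative choices, not the disclosed implementation:

```python
import numpy as np

def mutual_information(vol_a, vol_b, bins=32):
    """Mutual information between two same-grid volumes, usable as a
    registration cost to be maximized over candidate alignments."""
    hist, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

# Toy volumes: b is a contrast-remapped copy of a, so MI is high when
# aligned and drops when one volume is displaced.
a = np.random.default_rng(2).normal(size=(32, 32, 32))
b = a ** 2                                        # nonlinear intensity relation
print(mutual_information(a, b), mutual_information(a, np.roll(b, 8, axis=0)))
```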
  • only partial information can be reconstructed from the intraoperative tomography because of the limited quality of fluoroscopic imaging, limited available imaging angles, obstruction of the area of interest by other tissue, and/or space limitations of the operational environment.
  • the partial 3D volume reconstructed from intraoperative imaging and preoperative CT can be registered through correspondent paired features.
  • a nonlimiting example is the mutual-information similarity-based registration method. The two image sources can then be fused together to form a unified data set.
  • the abovementioned dataset can be updated from time to time with additional intra-procedure images.
  • exemplary methods compensate for target movement through one of the following: (1) estimating local movement by instrument markers; (2) tracking tissue and anatomical landmarks; (3) grouping images having close geometric configuration (such as the same position of the respiratory cycle) while using all the available images; (4) correlating with preoperative imaging; and/or (5) estimating pose for each image without use of an external jig.
  • in some embodiments, both static (fixed) markers (e.g., a jig positioned under the patient) and dynamic markers inside the lungs (e.g., a radiopaque instrument) are used.
  • local 3D movement is estimated by using radio-opaque markers located on an instrument that is positioned within the body.
  • Figure 7 shows an exemplary method for estimating local 3D movement by using radio-opaque instrument markers.
  • it is assumed that the instrument markers are close to the region of interest and move similarly (e.g., that the instrument is positioned within an airway of interest).
  • intraoperative images showing the instrument within the body are acquired.
  • the 3D pose of each marker (e.g., the 3D location of each marker in the reference coordinate system) is estimated as described in paragraph [000122] of International Patent Application Publication No. WO/2015/101948, the contents of which are incorporated herein by reference in their entirety, and which teaches that “a depth of at least one section of the instrument is calculated by (1) comparison of (a) the projected instrument shape on fluoroscopic image with (b) the known anatomical structure of the bronchial airway and (2) making an assumption of constrained instrument location inside the bronchial tree (Figure 13)”.
  • this allows reconstruction of the trajectory of a marker during the recording of 2D images.
  • the camera poses for each image are corrected to compensate for movement.
  • correction is performed by calculating a 3D transformation that represents the movement of the instrument for each image and correcting the calculated camera pose in a way that will compensate for that movement.
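For illustration, a minimal sketch of folding an estimated instrument movement into a camera pose, assuming 4x4 homogeneous transforms; the key identity is that viewing moved tissue from the original camera is equivalent to viewing static tissue from a camera pose composed with the motion. Names and values are hypothetical:

```python
import numpy as np

def correct_camera_pose(cam_from_world, instrument_motion_world):
    """Fold the estimated 3D movement of the instrument (and hence the
    local tissue) into the camera pose of one frame.

    cam_from_world: (4, 4) pose estimated from the static jig.
    instrument_motion_world: (4, 4) rigid transform describing how the
        instrument moved, in world (jig) coordinates, relative to the
        reference frame of the sequence.
    Returns a pose under which the moving tissue appears static, so that
    all frames can feed a single tomographic reconstruction.
    """
    # If tissue points satisfy p' = M p, then C @ p' = (C @ M) @ p:
    # the corrected pose C @ M sees static tissue where C saw moved tissue.
    return cam_from_world @ instrument_motion_world

pose = np.eye(4)
pose[2, 3] = 900.0
motion = np.eye(4)
motion[0, 3] = 6.0          # tissue shifted 6 mm along x between frames
print(correct_camera_pose(pose, motion))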
  • the reference coordinate system is defined by the coordinate system of a jig having a 3D or 2D pattern with markers (e.g., radiopaque spheres, radiopaque lines etc.).
  • boundary conditions are required for solving for the 3D location of a marker.
  • landmarks captured from preoperative CT imaging can assist in solving ambiguities and boundary conditions and in reconstructing the 3D location of the instrument and markers.
  • solving is performed as described in paragraph [000122] of International Patent Application Publication No. WO/2015/101948, the contents of which are incorporated herein by reference in their entirety, and which teaches “making an assumption of constrained instrument location inside the bronchial tree (Figure 13).”
  • reconstruction is performed using the corrected camera poses determined in step 730.
  • Figure 8 shows tissue movement of representative areas between two adjacent images within a frame 800.
  • three objects are shown at initial positions 810, 812, 814 in a first image. Due to respiration, in a second image in sequence, the same three objects are shown at subsequent positions 820, 822, 824.
  • the arrows 830, 832, 834 denote the magnitude and direction of the movement of the corresponding objects.
  • Figures 9A and 9B show flowcharts of first and second methods, respectively, for tracking movement of tissue and anatomical landmarks. In some embodiments, either of the methods of Figures 9A and 9B can be used individually.
  • in step 910, intraoperative images are acquired.
  • a sequence of images is recorded from various positions of the fluoroscopic device according to an initial plan or an operator preference.
  • the trajectory of the fluoroscopic device is extracted using a 3D model that is placed in the region of interest.
  • each pair of two adjacent images is processed to estimate the geometric translation vector of each pixel.
  • processing is performed by using a method such as optical flow, as described in paragraph [000115] of International Patent Application Publication No. WO/2015/101948, the contents of which are incorporated herein by reference in their entirety, and which teaches “tracking of visible tissues using optical flow on fluoroscopic video. The pixels on a fluoroscopic screen are (1) classified by density range, (2) tracked through the live fluoroscopic video, and (3) classified by movement.”
  • two-dimensional tissue movement between adjacent frames is estimated.
  • the two-dimensional tissue movement vector is estimated for every pixel by subtracting the vector of the camera motion from the tracked geometric translation at that pixel.
  • tissue movement is compensated for. In some embodiments, the tissue movement is compensated for by manipulating the images and updating the trajectory.
  • the compensation for tissue movement is performed by applying warping to each image in the direction opposite to, and with the same magnitude as, the estimated tissue movement.
  • a warping process deforms images by displacing every pixel in the opposite direction of the estimated tissue movement and by the same magnitude as the estimated tissue movement. In step 940, reconstruction is performed using the corrected images produced by steps 920 and 930.
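A minimal sketch of such a warping step using backward mapping with OpenCV's remap; the flow field is assumed to come from a dense optical-flow step such as the one sketched earlier, and the synthetic example is illustrative:

```python
import cv2
import numpy as np

def unwarp_tissue(frame, flow):
    """Undo estimated 2D tissue movement by warping the frame so every
    pixel is displaced opposite to (and by the magnitude of) the motion.

    frame: (H, W) image; flow: (H, W, 2) per-pixel motion from the
    reference frame to this frame.
    """
    h, w = frame.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Backward mapping: sample this frame at (x + dx, y + dy) to bring
    # moved tissue back to its reference position.
    map_x = gx + flow[..., 0].astype(np.float32)
    map_y = gy + flow[..., 1].astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)

img = np.zeros((128, 128), np.uint8)
img[60:70, 60:70] = 255
moved = np.roll(img, 5, axis=1)                  # tissue drifted 5 px right
flow = np.zeros((128, 128, 2), np.float32)
flow[..., 0] = 5.0
diff = unwarp_tissue(moved, flow).astype(int) - img.astype(int)
print(np.abs(diff).max())                        # 0: drift undone
```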
  • in step 950, intraoperative images are acquired.
  • a sequence of images is recorded from various positions of the fluoroscopic device according to an initial plan or an operator preference.
  • the trajectory of the fluoroscopic device is extracted using a 3D model that is placed in the region of interest.
  • each pair of two adjacent images is processed to estimate the geometric translation vector of each pixel.
  • processing is performed by using a method such as optical flow, as described in paragraph [000115] of International Patent Application Publication No. WO/2015/101948, the contents of which are incorporated by reference herein in their entirety, and which teaches “tracking of visible tissues using optical flow on fluoroscopic video. The pixels on a fluoroscopic screen are (1) classified by density range, (2) tracked through the live fluoroscopic video, and (3) classified by movement.”
  • in step 960, two-dimensional tissue movement between adjacent frames is estimated.
  • the two-dimensional tissue movement vector is estimated for every pixel by subtracting the vector of the camera motion from the tracked geometric translation at that pixel.
  • in step 970, the average tissue movement vector is calculated for each frame over the entire image. In some embodiments, given the tissue movement vector per pixel, the average tissue movement is estimated by averaging all separate pixel movement vectors.
  • the camera pose for each image is updated to compensate for average tissue movement.
  • updating the camera pose to compensate for average tissue movement includes displacing the camera pose in 3D in a plane parallel to the image plane, such that the image origin moves in the opposite direction of the estimated average tissue movement.
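For illustration, a minimal sketch of displacing a camera pose to cancel the average tissue drift, assuming a known metric scale (mm per pixel) at the tissue depth; the function name, scale, and data are hypothetical:

```python
import numpy as np

def shift_camera_for_average_motion(cam_from_world, avg_flow_px, mm_per_px):
    """Displace a camera pose in a plane parallel to the image plane so
    static tissue projects where the drifted tissue appeared.

    cam_from_world: (4, 4) pose for one frame.
    avg_flow_px: (2,) mean 2D tissue movement over the whole frame.
    mm_per_px: approximate metric scale at the depth of the tissue.
    """
    d_cam = np.array([avg_flow_px[0], avg_flow_px[1], 0.0]) * mm_per_px
    corrected = cam_from_world.copy()
    # Adding d to the camera translation makes static tissue project where
    # the moved tissue appeared, i.e., the image origin shifts opposite to
    # the tissue drift.
    corrected[:3, 3] += d_cam
    return corrected

# Hypothetical per-pixel flow; the average is taken over all pixels.
flow = np.random.default_rng(3).normal([2.0, -1.0], 0.2, size=(100, 2))
avg = flow.mean(axis=0)
print(shift_camera_for_average_motion(np.eye(4), avg, mm_per_px=0.4))
```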
  • reconstruction is performed with the corrected/updated camera poses.
  • the exemplary embodiments of Figures 9A and 9B result in a sharper reconstructed 3D image that allows pinpointing the target with high precision.
  • tissue that undergoes repetitive movement will produce images having portions with similar geometric configuration to one another.
  • an exemplary method includes clustering similar phases with one another by assigning each image to one of a plurality of phases.
  • Figure 10 illustrates exemplary phases of the same tissue that may be clustered with one another, namely a first phase 1010, a second phase 1020, and a third phase 1030.
  • each marker located in one image defines an epipolar line in all other images, on which the same marker should be located in other images if there were no movement at all.
  • it can be assumed the movement is repetitive according to breathing phases.
  • Figure 11 shows an exemplary method for grouping images having similar geometric configuration to one another.
  • in step 1110, intraoperative images are received.
  • in step 1120, the pose of each intraoperative image is estimated.
  • in step 1130, landmarks are identified in each intraoperative image.
  • in step 1140, images are grouped by movement phase (e.g., breathing phase).
  • each subset of the images that are in a similar breathing phase with one another is grouped together by evaluating the fit of each marker to the epipolar lines defined by the same marker in other images within the group.
  • the algorithm clusters images by breathing phase.
  • breathing phase can be characterized by the distance (closeness) of the markers to the epipolar lines.
  • clustering is performed by a k-means algorithm.
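A minimal sketch of the grouping idea, assuming the fundamental matrix F between a reference image and each other image is available from the estimated poses, and that the same marker has been located in every image; scikit-learn's k-means stands in for the clustering step, and the residual values are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

def epipolar_residual(F, x_ref, x_i):
    """Distance of marker x_i (in image i) to the epipolar line that the
    same marker x_ref (in a reference image) induces via F. Near zero when
    the anatomy did not move between the two exposures."""
    l = F @ np.append(x_ref, 1.0)                # line a*u + b*v + c = 0
    return abs(l @ np.append(x_i, 1.0)) / np.hypot(l[0], l[1])

def group_by_phase(residuals, n_phases=3):
    """Cluster per-image epipolar residuals into breathing-phase groups."""
    km = KMeans(n_clusters=n_phases, n_init=10, random_state=0)
    return km.fit_predict(np.asarray(residuals, dtype=float).reshape(-1, 1))

# Hypothetical residuals: small near the reference phase, larger elsewhere.
res = [0.3, 0.5, 6.1, 5.8, 11.9, 0.4, 6.3, 12.2]
print(group_by_phase(res))                       # e.g., [0 0 1 1 2 0 1 2]
```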
  • in step 1150, the reconstruction is performed on each breathing-phase group (cluster) separately.
  • in step 1160, multiple reconstructions (corresponding to different breathing-phase groups) are fused (e.g., merged) with one another, allowing a spatial transformation relative to the reference coordinate system, including translation, rotation, and scaling (since breathing causes inflation/deflation of the lung).
  • fusing may be performed by averaging corresponding voxel values between reconstructed images after said transformation.
  • a 3D transform between a preoperative imaging modality (e.g., CT imaging) and the jig is known. Accordingly, in some embodiments, for each intraoperative image, the camera position and orientation are known relative to the jig. Therefore, in some embodiments, the expected virtual intraoperative image according to the preoperative CT (e.g., a digitally reconstructed radiograph) can be calculated. In some embodiments, the calculation is performed as described in International Patent Application Publication No. WO/2015/101948, the contents of which are incorporated herein by reference in their entirety, and which describes a digitally reconstructed radiograph (“DRR”) method of generating virtual fluoroscopic images. In some embodiments, the obtained intraoperative image is compared and best fitted (e.g., by a 2D transform) to the expected virtual intraoperative image, thus compensating for movement.
  • FIG. 12 illustrates an exemplary method for correlation of intraoperative images with preoperative imaging.
  • intraoperative images are acquired.
  • the pose is estimated for each intraoperative image.
  • a DRR image is generated for each intraoperative image from the preoperative image based on the pose of the intraoperative image.
  • the DRR image is generated in accordance with the techniques described in International Patent Application Publication No. WO/2015/101948, the contents of which are incorporated herein by reference in their entirety.
  • a transform between the intraoperative image and the generated DRR image is calculated.
  • the generated DRR image is compared to the corresponding intraoperative image by cross-correlation and the best 2D shift between the generated DRR image and the intraoperative image can be estimated; the best 2D shift represents the tissue movement.
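For illustration, a minimal sketch of estimating that 2D shift; OpenCV's phase correlation is used here as a stand-in for the cross-correlation search described, and the DRR and frame are synthetic placeholders:

```python
import cv2
import numpy as np

def drr_shift(drr, fluoro):
    """Best 2D shift between a generated DRR and the corresponding
    intraoperative frame; the shift is read as the tissue movement."""
    a = np.float32(drr)
    b = np.float32(fluoro)
    (dx, dy), response = cv2.phaseCorrelate(a, b)
    return np.array([dx, dy]), response          # response = peak confidence

drr = np.zeros((128, 128), np.float32)
drr[50:70, 50:70] = 1.0
fluoro = np.roll(drr, 3, axis=0)                 # tissue moved 3 px along y
shift, conf = drr_shift(drr, fluoro)
print(np.round(np.abs(shift), 1))                # magnitude ~[0, 3] px
```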
  • movement compensation for the intraoperative image is performed.
  • the camera pose of the corresponding image is moved to compensate for the required shift of the pixels of the image due to tissue displacement.
  • reconstruction is performed using the corrected camera pose for each intraoperative image.
  • 3D reconstruction can be computed without using any jig having a series of radio-opaque markers.
  • intraoperative fluoroscopic images are collected while changing a pose of a C-arm.
  • a pose estimation is performed using anatomical structures extracted from the preoperative images and the intraoperative fluoroscopic images.
  • pose estimation is performed as described in International Patent Application Publication No. WO/2015/101948, the contents of which are incorporated herein by reference in their entirety, and which teaches “using first imaging modality to obtain at least one first image of chest; perform manual or automatic segmentation of natural body cavities such as bronchial airways in 3D space; acquire at least one images or sequence of video frames from second imaging modality, such as fluoroscopy or DSA; generation of two-dimensional augmented image generated from second imaging modality that combines unique information to describe the full or partial map of natural body cavities such as portion of bronchial airway tree, abovementioned as augmented bronchogram; calculate registration between first and second imaging modalities through pose estimation by fitting abovementioned corresponded features”.
  • in some embodiments, the anatomical structures are ribs.
  • all poses can be solved by a method such as bundle adjustment.
  • this calculation takes into account an assumed trajectory of the C-arm when the C-arm is rotated during acquisition of intraoperative images.
  • adding the C-arm movement trajectory as a constraint limits the possible camera poses for each intraoperative image, thereby decreasing the number of degrees of freedom.
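A minimal sketch of this constrained bundle adjustment, assuming known 3D landmarks (e.g., rib points matched to preoperative CT), an idealized circular C-arm trajectory, and one unknown arc angle per frame; the geometry, focal length, and data are simplified assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 3D anatomical landmarks (e.g., rib points) around isocenter.
rng = np.random.default_rng(4)
X = rng.uniform([-60, -60, -40], [60, 60, 40], size=(12, 3))
f, radius = 1200.0, 900.0                        # focal length, C-arm radius

def pose_on_arc(theta):
    """Camera pose constrained to a circular C-arm trajectory around the
    y-axis: a single angle replaces six free pose parameters per frame."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])
    t = np.array([0.0, 0.0, radius])             # camera looks at isocenter
    return R, t

def project(theta, pts):
    R, t = pose_on_arc(theta)
    pc = pts @ R.T + t
    return f * pc[:, :2] / pc[:, 2:3]

# Simulated noisy 2D observations of the landmarks in three frames.
true_thetas = np.deg2rad([-20.0, 0.0, 25.0])
obs = [project(th, X) + rng.normal(0, 0.3, (len(X), 2)) for th in true_thetas]

def residuals(thetas):
    """Joint reprojection error over all frames: bundle adjustment with the
    arc constraint, one unknown angle per intraoperative image."""
    return np.concatenate([(project(th, X) - o).ravel()
                           for th, o in zip(thetas, obs)])

fit = least_squares(residuals, x0=np.zeros(3))
print(np.rad2deg(fit.x))                         # approx [-20, 0, 25]
```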
  • Figure 13 shows an exemplary method for completing missing information due to limited imaging angles.
  • the quality of reconstruction can be improved by completing missing information.
  • in step 1310, a series of intraoperative images is acquired.
  • in step 1320, the pose is estimated for each of the acquired intraoperative images.
  • in step 1330, virtual images are generated for orientations that are missing from the acquired series of images.
  • the missing views for some orientations can be synthetically produced by using, for example, virtual fluoroscopy images generated from a CT image.
  • missing views are generated by using a machine learning-based approach, such as generative adversarial networks (GANs) trained with a large number of real images, to generate images for missing orientations.
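For illustration, a schematic (untrained) conditional GAN for synthesizing views at missing orientations; the fully connected networks, placeholder data, and sizes are assumptions standing in for a real image-synthesis architecture trained on many real acquisitions:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, img=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + 64, 256), nn.ReLU(),   # input: pose angle + noise
            nn.Linear(256, img * img), nn.Tanh())
        self.img = img
    def forward(self, angle, z):
        out = self.net(torch.cat([angle, z], dim=1))
        return out.view(-1, 1, self.img, self.img)

class Discriminator(nn.Module):
    def __init__(self, img=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img * img + 1, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))                   # real/fake logit
    def forward(self, imgs, angle):
        return self.net(torch.cat([imgs.flatten(1), angle], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One schematic training step on stand-in data (real frames + their poses).
real = torch.rand(8, 1, 64, 64) * 2 - 1
angles = torch.rand(8, 1)
fake = G(angles, torch.randn(8, 64))
loss_d = bce(D(real, angles), torch.ones(8, 1)) + \
         bce(D(fake.detach(), angles), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
loss_g = bce(D(fake, angles), torch.ones(8, 1))  # generator tries to fool D
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training on many real acquisitions, a view for a missing angle:
virtual = G(torch.tensor([[0.33]]), torch.randn(1, 64))
```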
  • reconstruction is performed using both the real images acquired in step 1310 and the virtual images generated in step 1330.
  • the reconstructed volume is registered to a preoperative volume determined based on preoperative imaging.
  • registration can be performed in accordance with any of the methods described above with reference to Figures 1-6.
  • the registered preoperative and reconstructed volumes may be suitable for use in the generation of augmented images.

Abstract

The invention relates to a method including obtaining at least two intraoperative images of an area of interest in a body of a patient while the area of interest is moving, wherein at least a first one of the at least two intraoperative images shows the area of interest in a first position, and at least a second one of the at least two intraoperative images shows the area of interest in a second position that is different from the first position; estimating a three-dimensional pose of each of the at least two intraoperative images; performing a correction to compensate for movement of the area of interest between the first position and the second position; and reconstructing a three-dimensional volume of the area of interest based on the correction.
PCT/IB2020/000173 2019-02-27 2020-02-27 Methods and systems for movement compensation during three-dimensional reconstruction of anatomical structures WO2020174284A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962811477P 2019-02-27 2019-02-27
US62/811,477 2019-02-27

Publications (1)

Publication Number Publication Date
WO2020174284A1 (fr)

Family

ID=72239302

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/000173 WO2020174284A1 (fr) Methods and systems for movement compensation during three-dimensional reconstruction of anatomical structures

Country Status (1)

Country Link
WO (1) WO2020174284A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023129562A1 (fr) * 2021-12-29 2023-07-06 Noah Medical Corporation Systems and methods for pose estimation of an imaging system
US11816768B1 (en) 2022-12-06 2023-11-14 Body Vision Medical Ltd. System and method for medical imaging

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120071758A1 (en) * 2010-01-12 2012-03-22 Martin Lachaine Feature Tracking Using Ultrasound
US20120245481A1 (en) * 2011-02-18 2012-09-27 The Trustees Of The University Of Pennsylvania Method for automatic, unsupervised classification of high-frequency oscillations in physiological recordings
US20140369584A1 (en) * 2012-02-03 2014-12-18 The Trustees Of Dartmouth College Method And Apparatus For Determining Tumor Shift During Surgery Using A Stereo-Optical Three-Dimensional Surface-Mapping System
US20170319165A1 (en) * 2014-01-06 2017-11-09 Body Vision Medical Ltd. Surgical devices and methods of use thereof
US20160035108A1 (en) * 2014-07-23 2016-02-04 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
WO2018236936A1 (fr) * 2017-06-19 2018-12-27 Mahfouz Mohamed R Surgical navigation of the hip using fluoroscopy and tracking sensors
US20190035118A1 (en) * 2017-07-28 2019-01-31 Shenzhen United Imaging Healthcare Co., Ltd. System and method for image conversion


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20762749; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20762749; Country of ref document: EP; Kind code of ref document: A1)