WO2023031688A1 - Combined multiple imaging modalities in surgical procedures - Google Patents

Combined multiple imaging modalities in surgical procedures

Info

Publication number
WO2023031688A1
WO2023031688A1 (application PCT/IB2022/055511)
Authority
WO
WIPO (PCT)
Prior art keywords
kidney
probe
tumor
data
images
Application number
PCT/IB2022/055511
Other languages
English (en)
Inventor
Arie Rond Shaul
Original Assignee
Rsip Neph Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Rsip Neph Ltd.
Publication of WO2023031688A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0833 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures
    • A61B 8/085 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B 6/5229 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B 6/5247 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from an ionising-radiation diagnostic technique and a non-ionising radiation diagnostic technique, e.g. X-ray and ultrasound
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/12 Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/42 Details of probe positioning or probe attachment to the patient
    • A61B 8/4245 Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/44 Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
    • A61B 8/4416 Constructional features of the ultrasonic, sonic or infrasonic diagnostic device related to combined acquisition of different diagnostic modalities, e.g. combination of ultrasound and X-ray acquisitions
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B 8/5238 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
    • A61B 8/5261 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image combining images from different diagnostic modalities, e.g. ultrasound and X-ray
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/101 Computer-aided simulation of surgical operations
    • A61B 2034/105 Modelling of the patient, e.g. for ligaments or bones
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046 Tracking techniques
    • A61B 2034/2055 Optical tracking systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 2090/364 Correlation of different images or relation of image positions in respect to the body
    • A61B 2090/365 Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/37 Surgical systems with images on a monitor during operation
    • A61B 2090/371 Surgical systems with images on a monitor during operation with simultaneous use of two cameras
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/37 Surgical systems with images on a monitor during operation
    • A61B 2090/378 Surgical systems with images on a monitor during operation using ultrasound
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/361 Image-producing devices, e.g. surgical cameras

Definitions

  • The subject matter disclosed herein relates in general to methods and systems for performing surgical procedures, in particular partial nephrectomy, and more particularly to a method and system for combining multi-imaging modalities for use in surgical procedures.
  • Computer-vision tools are widely used in surgical procedures.
  • One application is in surgical procedures involving renal cancer.
  • Renal cancer is one of the 10 most common cancers, with a 1-2% lifetime risk of developing it.
  • Common treatment requires surgical removal of the tumor in addition to standard oncologic healthcare.
  • a partial nephrectomy (nephron/kidney sparing nephrectomy) is preferable as it leaves the healthy part of the kidney intact, allowing better long-term recovery.
  • nephrectomy can be done via open, minimally invasive, or robotic assisted surgery (RAS).
  • the benefits of RAS are the smaller incisions, fewer complications, and faster recovery, while maintaining the full surgical freedom of an open surgery.
  • the main challenge of robotic assisted partial nephrectomy is to remove the tumor or diseased tissue entirely, while sparing the maximal amount of healthy renal tissue.
  • Current practice utilizes standard visual cues to detect the tumor.
  • the surgeon assesses the tumor location based on anatomy and references from pre-op Computed Tomography (CT)/ Magnetic Resonance Imaging (MRI)/Ultrasound (US) scans and dissects around the approximated location with a security buffer.
  • There is provided a method including, during a partial nephrectomy surgical procedure, tracking an ultrasound (US) probe; generating a three-dimensional (3D) representation of multiple US images of a kidney and/or a tumor with known orientation relative to the tracked laparoscopic US probe; overlaying in real time the generated 3D representation of multiple US images with intraoperative data to obtain overlaid data; and displaying the overlaid data on the intraoperative camera data.
  • the method further includes overlaying pre-operative 3D computer tomography (CT) and/or Magnetic Resonance Imaging (MRI), or associated 3D anatomical model images on said intraoperative camera data.
  • the intraoperative camera data includes stereo imaging from at least two cameras.
  • the US probe includes a laparoscopic probe.
  • the method includes detecting landmarks on the kidney and/or the tumor.
  • the method further includes detecting a position of landmarks on the kidney and/or the tumor.
  • the method further includes tracking the position of the US probe using triangulation.
  • the method further includes displaying the overlaid data on the intraoperative camera data in real time.
  • the method further includes performing segmentation on the 3D US images.
  • a system comprising: one or more processors; and one or more memories storing instructions executable by the one or more processors, and which cause the system to perform the following steps: tracking an ultrasound (US) probe; generating a three-dimensional (3D) representation of multiple US images of a kidney and/or a tumor with known orientation relative to the tracked laparoscopic US probe; overlaying in real-time the generated 3D representation of multiple US images with intraoperative camera data to obtain overlaid data; and displaying the overlaid data on the intraoperative camera data, wherein the overlaid data display relates to a tumor and/or a kidney during a partial nephrectomy surgical procedure.
  • the system is further configured to overlay pre-operative 3D computer tomography (CT) and/or Magnetic Resonance Imaging (MRI), or associated 3D anatomical model images, on the intraoperative camera data.
  • the intraoperative camera data includes stereo imaging from at least two cameras.
  • the US probe includes a laparoscopic probe.
  • the system is further configured to detect landmarks on the kidney and/or the tumor.
  • the generating includes deforming the kidney and/or tumor.
  • the system is further configured to detect a position of landmarks on the kidney and/or the tumor.
  • the system is further configured to track the position of the US probe using triangulation.
  • the system is further configured to display the overlaid data on the intraoperative camera data in real time.
  • the system is further configured to perform segmentation on the 3D US images.
  • FIG. 1 shows an exemplary flow chart of a method of combining multi-imaging modalities for use in surgical procedures, optionally for use in a robotic-assisted partial nephrectomy;
  • FIG. 2A shows an exemplary flow chart of a method for automatic segmentation of the PRE-OP CT model described in FIG. 1;
  • FIG. 2B shows an exemplary porcine kidney segmentation on a CT scan obtained with a standard scanning protocol
  • FIG. 3A shows an exemplary flow chart of a method for 3D US creation and automatic segmentation of the INTRA-OP US model described in FIG. 1;
  • FIG. 3B shows an exemplary 3D US image of a section of an organ, for example the kidney, created from multiple 2D US images, as represented by 2D US image slice and 2D US image slice;
  • FIG. 3C shows an exemplary porcine kidney segmentation on a US scan obtained with a standard scanning protocol
  • FIGS. 4A and 4B show exemplary flow charts of a method for the segmentation registration creation described in FIG. 1 which is prepared during the development stage;
  • FIG. 4C is an exemplary implementation of the US-CT segmentation registration described in the method shown in FIG. 1 and FIG. 4A and 4B;
  • FIG. 5 shows an exemplary method for continuous tracking using RGB cameras when the US probe is not in contact with the kidney, described in block 108 of FIG. 1;
  • FIG. 6 shows exemplary incision lines and tumor margins used in the planning stage
  • FIG. 7 schematically illustrates an exemplary combined multi-imaging system (CMIS).
  • The present disclosure describes a method and a system for combining pre-op imaging modalities (e.g., CT) with intra-op imaging modalities such as stereo video and US to allow for more accurate removal of a segment of an organ, optionally involving RAS.
  • The disclosed method and system are described herein below, for exemplary purposes, in relation to partial nephrectomy. Nonetheless, it should be apparent to those versed in the art that similar methods and systems can be applied to other surgical procedures (e.g., partial hepatectomy).
  • Diagnosis of renal tumors and the preparation for a partial nephrectomy often include a CT scan.
  • In robotic assisted partial nephrectomy, US is used for visibility into the tissue and stereo cameras are used for depth perception. Combining information from the CT scan, live US images, and stereo cameras can significantly improve surgical outcomes and reduce complication rates and procedural time.
  • FIG. 1 shows an exemplary flow chart of a method 100 of combining multi-imaging modalities for use in surgical procedures, optionally for use in a robotic-assisted partial nephrectomy.
  • a pre-operational (PRE-OP) CT and/or MRI 3D model of a patient is acquired and automatically segmented to mark all relevant areas of the kidney, as well as the tumor.
  • CT and/or MRI may be jointly referred to as “CT.”
  • A more detailed description of block 102 is provided further below with reference to FIGS. 2A and 2B and a method 200 for automatic segmentation of the PRE-OP CT model.
  • an INTRA-OP 3D US image is created by tracking a US probe using a set of video cameras.
  • the INTRA-OP 3D US model is segmented to mark the relevant areas of the kidney.
  • A more detailed description of block 104 is provided further below with reference to FIGS. 3A, 3B, and 3C, and a method 300 for automatic segmentation of the INTRA-OP US model.
  • the segmented regions of the INTRA-OP US model and the PRE-OP CT model are registered one to the other, allowing for possible deformations.
  • the registration of the PRE-OP CT model segmentation and INTRA-OP US model segmentation may be performed at the beginning of the surgical procedure and may be continuously updated throughout the surgical procedure, optionally based on tracking information obtained from a set of two or more stereo cameras. Additionally, or alternatively, the registration may be performed at the beginning of the surgical enhancement stage.
  • the registration process between the PRE-OP CT model segmentation and INTRA-OP US model segmentation may be repeated until optimal registration has been achieved.
  • US-CT registration is a key step for navigation during the surgical procedure.
  • the registration may utilize one or more of the following matching techniques:
  • physical deformation models may be used to produce more realistic deformation.
  • the deformation models may be added as constraints to the registration process, to better define the optimization problem and to produce a representation of the deformations, among other effects.
  • deep learning may be employed using a U-net-type NN which takes as input the segmentations of both CT and US and outputs a displacement vector for each voxel to form the CT-US registration. Additionally, or alternatively, other NNs may be used.
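  • For illustration only (not part of the disclosure), the minimal sketch below shows how a per-voxel displacement field of the kind produced by such a network could be applied to warp the CT segmentation into the US frame; the array shapes and the helper name `warp_with_displacement` are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_displacement(ct_seg, displacement):
    """Warp a CT label volume into US space using a per-voxel displacement field.

    ct_seg       : (D, H, W) integer label volume (CT segmentation).
    displacement : (3, D, H, W) displacement, in voxels, for every voxel,
                   e.g. as output by a U-net-type registration network.
    """
    grid = np.meshgrid(np.arange(ct_seg.shape[0]),
                       np.arange(ct_seg.shape[1]),
                       np.arange(ct_seg.shape[2]), indexing="ij")
    coords = [g + d for g, d in zip(grid, displacement)]
    # Nearest-neighbour sampling keeps the label values intact.
    return map_coordinates(ct_seg, coords, order=0, mode="nearest")
```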
  • A more detailed description of block 106 is provided further below with reference to FIGS. 4A, 4B, and 4C, and in a method 400 of INTRA-OP RGB tracking, optionally updating the initial US-CT segmentation registration.
  • INTRA-OP tracking and registration of the kidney cortex is performed using two or more RGB cameras.
  • the RGB camera registration may be performed for continuous tracking, and optionally covering situations where the algorithm loses track of the kidney.
  • the entire RGB registration process may be repeated periodically or continuously to maintain accurate registration throughout the surgical procedure.
  • the RGB cameras may be used to find the locations of visible landmarks, for example, histogram of oriented gradients (HOG) features, for calculating a transformation between the landmarks at two separate times. A transformation based on these landmarks is added to the US-CT segmentation registration to get a real-time displacement of the regions.
  • multiple images may be used at each time stamp to get multiple readings of each location and perform bundle adjustment to get a finer location of each feature.
  • A more detailed description of block 108 is provided further below with reference to FIG. 5 and a method 500 for continuous tracking using RGB cameras, including when the US probe is not in contact with the kidney.
  • the PRE-OP CT model is overlaid onto the RGB camera stream to assist the surgeon in conducting the surgical procedure.
  • This may include an accurate real time representation of the tumor and its position, optionally when not fully visible on the one or more RGB cameras, the position of blood vessels and other anatomical structures to be avoided during the surgical procedure, cues where to make the initial incision, and where to resect the kidney to remove the tumor while minimizing removal of healthy tissue.
  • the overlay process may be re-applied periodically throughout the surgical procedure.
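  • As a rough illustration of the overlay step, the sketch below projects registered 3D model points onto an RGB frame with a standard pinhole camera model; it assumes the camera pose and intrinsics are available from the calibration and tracking steps, and the helper name `overlay_model` is hypothetical.

```python
import cv2
import numpy as np

def overlay_model(frame, model_points, rvec, tvec, K, dist, color=(0, 255, 0)):
    """Draw registered 3D model points (e.g., tumor-margin vertices) on an RGB frame.

    model_points : (N, 3) points in the frame established by the US-CT/RGB registration.
    rvec, tvec   : camera pose (Rodrigues rotation vector and translation).
    K, dist      : intrinsic matrix and distortion coefficients from calibration.
    """
    pts2d, _ = cv2.projectPoints(model_points.astype(np.float32), rvec, tvec, K, dist)
    for u, v in pts2d.reshape(-1, 2):
        if 0 <= u < frame.shape[1] and 0 <= v < frame.shape[0]:
            cv2.circle(frame, (int(u), int(v)), 1, color, -1)
    return frame
```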
  • FIG. 2A shows an exemplary flow chart of a method 200 for automatic segmentation of the PRE-OP CT model described in block 102. It is noted that method 200 may be executed prior to the start of the surgical procedure, and may, in some embodiments, include more or fewer steps, and/or variations in the sequence of steps.
  • a 3D renal CT or MRI model (PRE-OP CT model) is acquired prior to the surgical procedure.
  • Acquisition of the 3D renal CT or MRI model may include marking the tumor and/or diseased sections of the kidney, as well as regions of the kidney such as but not limited to renal pelvis, cortex, arteries, veins, and ureter.
  • the PRE-OP CT model may include multiphase CT/MRI scans including consecutive image acquisitions in separate phases of the contrast agent enhancement stages, to allow visibility of different vascular structures.
  • At block 204, registration of the multiphase scans of the PRE-OP CT model is performed.
  • The registration may be performed using different methods, for example, mutual information maximization registration.
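  • By way of a hedged example, mutual-information maximization between two contrast phases could be set up along the following lines with SimpleITK; the metric, optimizer, and parameter values shown are illustrative choices, not values specified by the disclosure.

```python
import SimpleITK as sitk

def register_phases(fixed, moving):
    """Rigidly register one contrast phase (moving) to another (fixed) by
    maximizing Mattes mutual information, then resample onto the fixed grid."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.01)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)
    tx = reg.Execute(sitk.Cast(fixed, sitk.sitkFloat32),
                     sitk.Cast(moving, sitk.sitkFloat32))
    return sitk.Resample(moving, fixed, tx, sitk.sitkLinear, 0.0, moving.GetPixelID())
```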
  • At block 206, multi-phase CT segmentation is performed on the PRE-OP CT model.
  • the segmentation may be performed using deep learning techniques such as, for example, the UNET neural network (NN) model.
  • various landmarks may be identified and labelled in the 3D PRE-OP CT model, and may include use of deep learning methods.
  • a same deep learning model may be used to perform both segmentation and identification/labelling.
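  • For orientation, a 3D U-net segmentation step of the kind mentioned above might look as follows; the MONAI-based network configuration, channel counts, and label set are assumptions made only for this sketch.

```python
import torch
from monai.networks.nets import UNet
from monai.inferers import sliding_window_inference

# One input channel per registered contrast phase, one output channel per structure
# (e.g., background, cortex, renal pelvis, arteries, veins, ureter, tumor).
model = UNet(spatial_dims=3, in_channels=3, out_channels=7,
             channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2), num_res_units=2)

def segment(volume):
    """volume: (1, C, D, H, W) float tensor holding the registered multiphase CT."""
    model.eval()
    with torch.no_grad():
        logits = sliding_window_inference(volume, roi_size=(96, 96, 96),
                                          sw_batch_size=1, predictor=model)
    return logits.argmax(dim=1)  # per-voxel label map
```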
  • the 3D model may undergo pre-processing prior to segmentation, such as but not limited to denoising and image sharpening, to enhance the accuracy of the segmentation.
  • the pre-processing can be performed using deep learning models.
  • FIG. 2B shows an exemplary porcine kidney segmentation 250 on a CT scan obtained with a standard scanning protocol.
  • the porcine kidney segmentation 250 includes the renal cortex 252, the renal pelvis 254, and a tumor 256.
  • the scan was made in an experimental setting with standard scanning parameters, and resembles a CT scan which may be acquired in block 202 of method 200.
  • the segmentation may be performed in block 206 of method 200.
  • FIG. 3A shows an exemplary flow chart of a method 300 for 3D US creation and automatic segmentation of the INTRA-OP US model described in block 104. It is noted that method 300 may be executed at the start of the surgical procedure with some of the steps repeated during the surgical procedure. Furthermore, in some embodiments, method 300 may include more or fewer steps, and/or variations in the sequence of steps.
  • a US probe for acquiring images may be introduced into the renal area at the beginning of the surgical procedure.
  • At least two stereo cameras positioned in the renal area may be used to detect the US probe, which may include landmarks or keypoints for detection, optionally dedicated detection markers.
  • the detection may include use of deep learning.
  • the deep learning algorithm may include use of a regression network for 6 degrees-of-freedom (DOF) calculation of the probe’s position and orientation.
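  • Purely as a sketch of such a regression network (the architecture and input layout are assumptions, not the disclosed design), a stereo pair stacked along the channel axis could be mapped to a 6-DOF probe pose as follows.

```python
import torch
import torch.nn as nn
import torchvision

class ProbePoseRegressor(nn.Module):
    """Regress tx, ty, tz, rx, ry, rz of the US probe from a stereo image pair
    stacked along the channel axis (6 input channels)."""

    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.conv1 = nn.Conv2d(6, 64, kernel_size=7, stride=2, padding=3, bias=False)
        backbone.fc = nn.Linear(backbone.fc.in_features, 6)
        self.net = backbone

    def forward(self, stereo_pair):      # (B, 6, H, W)
        return self.net(stereo_pair)     # (B, 6) pose parameters

# Training could minimize an L1/L2 loss against poses obtained from marker-based tracking.
```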
  • a number of 2D US images of the renal area are acquired during the surgical procedure.
  • the location and orientation of the US probe may be repeatedly determined, optionally continuously, during the acquisition process.
  • pre-processing of the acquired US images may be performed using various computer vision and deep-learning algorithms to provide an optimal basis for image fusion.
  • the pre-processing may include, for example, denoising the US images by filtering methods such as anisotropic filters, enhancing image sharpness, or learned forms of pre-processing where a NN is trained to restore corrupted images.
  • a location and orientation of the US probe relative to the set of stereo cameras may be determined, optionally using triangulation based on the stereo cameras detection of the landmarks. Triangulation mathematical formulas and/or methods such as bundle adjustment may be used to increase location and orientation accuracy.
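  • The triangulation itself reduces to standard stereo geometry; the sketch below (helper name and shapes assumed) recovers 3D marker positions from matched detections in two calibrated views.

```python
import cv2
import numpy as np

def triangulate_markers(pts_left, pts_right, P_left, P_right):
    """Triangulate marker positions from a calibrated stereo pair.

    pts_left, pts_right : (N, 2) pixel coordinates of the markers in each view.
    P_left, P_right     : (3, 4) projection matrices from stereo calibration.
    Returns an (N, 3) array of marker positions in the camera frame.
    """
    pts4d = cv2.triangulatePoints(P_left, P_right,
                                  pts_left.T.astype(np.float32),
                                  pts_right.T.astype(np.float32))
    return (pts4d[:3] / pts4d[3]).T

# Fitting a rigid transform to these marker positions, optionally refined over several
# frames by bundle adjustment, yields the probe's 6-DOF pose.
```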
  • calibration of the stereo cameras may be performed in a two-step process using a pre-defined calibration jig.
  • a calibration grid may be used which may include a checkerboard of a known size optionally including ArUco markers or a more complex predefined pattern.
  • stereo images of the calibration grid may be acquired, followed by a second mathematical calculation step to obtain intrinsic parameters and offsets. These parameters may include camera focal length, camera lens distortion, and location of the camera in 3D space. It is noted that calibration may also be performed more than once during the surgical procedure.
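  • A conventional two-step calibration of the kind outlined above could be sketched as follows with OpenCV; the checkerboard dimensions, square size, and helper name are placeholders rather than values from the disclosure.

```python
import cv2
import numpy as np

def calibrate_stereo(img_pairs, board=(9, 6), square_mm=10.0):
    """Calibrate each camera, then the stereo rig, from checkerboard images of a known jig.

    img_pairs : list of (left, right) grayscale images of the calibration grid.
    Returns intrinsics (K1, d1, K2, d2) and the pose (R, T) of camera 2 w.r.t. camera 1.
    """
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_mm

    obj_pts, pts1, pts2 = [], [], []
    for left, right in img_pairs:
        ok1, c1 = cv2.findChessboardCorners(left, board)
        ok2, c2 = cv2.findChessboardCorners(right, board)
        if ok1 and ok2:
            obj_pts.append(objp)
            pts1.append(c1)
            pts2.append(c2)

    size = img_pairs[0][0].shape[::-1]
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, pts1, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, pts2, size, None, None)
    _, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, pts1, pts2, K1, d1, K2, d2, size, flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, d1, K2, d2, R, T
```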
  • the US probe and the cameras may include position sensors which may be used to determine the location and the orientation of the US probe relative to the set of stereo cameras.
  • the acquired 2D US images are paired with the position data of the US probe and remodelled into a 3D scan.
  • a grid is first defined. As the location of each pixel is known, interpolation may be performed to get measurements for all voxels in a 3D volume.
  • a minimal bounding rectangular shape may be fit to all the acquired voxels, for example, by maximizing the overlap between the rectangle and the voxels.
  • the rectangle may define a grid wherein its voxels may be calculated using standard interpolation tools.
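  • To make the remodelling step concrete, the sketch below scatters posed 2D US pixels into a regular voxel grid and averages overlapping samples; it uses a simple axis-aligned bounding box rather than the minimal bounding rectangle described above, and all names and shapes are assumptions.

```python
import numpy as np

def compound_volume(slices, poses, px_spacing_mm, voxel_mm=0.5):
    """Remodel tracked 2D US slices into a 3D volume on a regular voxel grid.

    slices        : list of (H, W) US images.
    poses         : list of (4, 4) transforms mapping slice-plane coordinates
                    (x_mm, y_mm, 0, 1) into the common camera/world frame.
    px_spacing_mm : (row_mm, col_mm) pixel spacing of the US images.
    """
    pts, vals = [], []
    for img, T in zip(slices, poses):
        rows, cols = np.indices(img.shape)
        plane = np.stack([cols * px_spacing_mm[1], rows * px_spacing_mm[0],
                          np.zeros(img.shape), np.ones(img.shape)], axis=-1)
        world = plane.reshape(-1, 4) @ T.T
        pts.append(world[:, :3])
        vals.append(img.reshape(-1).astype(np.float32))
    pts, vals = np.vstack(pts), np.concatenate(vals)

    lo = pts.min(axis=0)
    dims = np.ceil((pts.max(axis=0) - lo) / voxel_mm).astype(int) + 1
    vol = np.zeros(dims, dtype=np.float32)
    cnt = np.zeros(dims, dtype=np.float32)
    idx = np.round((pts - lo) / voxel_mm).astype(int)
    np.add.at(vol, tuple(idx.T), vals)   # accumulate samples per voxel
    np.add.at(cnt, tuple(idx.T), 1.0)
    np.divide(vol, cnt, out=vol, where=cnt > 0)
    return vol, lo  # volume and its physical origin
```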
  • segmentation of the 3D INTRA-OP US model may be performed.
  • the segmentation may include multi-organ segmentation and may include a subset of the organs labelled in the PRE-OP CT model, but not limited to, renal pelvis, cortex, arteries, veins, and ureter.
  • the segmentation may be done using deep learning techniques such as, for example, a U-net neural network model.
  • specific landmarks on the kidney model may be detected using U-net or other suitable NN model.
  • kidney cortex keypoints from the stereo cameras can be added to the segmentation of the 3D INTRA-OP US model
  • blood vessels may be detected using Doppler US, segmented with deep learning techniques, such as, for example, the U-net neural network model or other suitable NN model, and fused with the regular US image.
  • US and Doppler images are combined to form a more detailed input for a neural network model (e.g., U-net) that performs simultaneous segmentation of all the above organs.
  • further refinement of the 3D US scan may be performed using optimization techniques that force global segmentation slice correspondence to CT segmentation.
  • further refinement may be performed after segmentation.
  • the techniques may include use of single shot detection (SSD), normal cross correlation (NCC), or mutual information (MI) optimization.
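  • Of the refinement metrics named above, normalized cross-correlation is the simplest to state; a minimal version (illustrative only) is shown below and could be evaluated between a US slice and the corresponding reslice of the registered CT.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized slices or patches."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())
```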
  • FIG. 3B shows an exemplary 3D US image 350 of a section of an organ 352, for example the kidney, created from multiple 2D US images, as represented by 2D US image slice 354 and 2D US image slice 356.
  • a US probe 360 with optional detection markers 362 detectable by at least two stereo cameras, generates the 2D US image slice 354 which includes a slice of a tumor 364.
  • In a second position 366, US probe 360 generates the 2D US image slice 356, which also shows the tumor 364.
  • the 2D image slices may be generated in block 306 of method 300.
  • the 3D US image 350 may be generated in block 310 of method 300, as indicated by arrow 368.
  • FIG. 3C shows an exemplary porcine kidney segmentation 370 on a US scan obtained with a standard scanning protocol.
  • the porcine kidney segmentation 370 includes the renal cortex 372, the renal pelvis 374, and a tumor 376.
  • the scan was made in an experimental setting with standard scanning parameters, and resembles a US scan which may be acquired in block 306 of method 300.
  • the segmentation may be performed in block 310 of method 300.
  • FIGS. 4A and 4B show exemplary flow charts of a method 400 for the segmentation registration algorithm creation described in block 106 which is prepared during the development stage.
  • Method 400 may be divided into two stages: a first stage which involves creating a training dataset, and a second stage which involves the actual training of the NN, for example, a U-net NN. It is noted that the training dataset creation stage and the training stage in method 400 are executed, as previously stated, during development. Furthermore, in some embodiments, method 400 may include more or fewer steps, and/or variations in the sequence of steps.
  • FIG. 4A shows, in some embodiments, an exemplary flowchart of training set creation:
  • fiducial markers are embedded in training kidneys used for creating the training dataset.
  • the fiducial markers may later serve to compute a displacement vector for each voxel during the US-CT registration, thereby allowing a more exact computation of the loss function.
  • an undeformed kidney is scanned using CT.
  • the kidney may be a model kidney or, for example, a porcine kidney.
  • the kidneys are manipulated and deformations are added to the training CT and US scans to create realistic data.
  • the deformations may include mechanical movement, perfusion, and cutting deformations.
  • the deformed kidney is scanned using US.
  • the fiducial markers are detected in the CT scan of the undeformed kidney, and in the US scan of the deformed kidney. The locations of the fiducial markers are recorded for later use in the training.
  • the displacement of the fiducial markers is calculated for use in the training.
  • the CT scan of the undeformed kidney and the US scan of the deformed kidney are input to the NN, optionally by using images or using segmentations.
  • the locations of the fiducial markers from block 410, the manual segmentation from block 412, and the displacement of the fiducial markers from block 416, are input to the NN. These serve to create the ground truth of each input.
  • the ground truth is used to calculate the loss during training.
  • the loss may include a dice score between segmentations and/or an MSE between the location of the fiducial in the undeformed CT scan and the location of the fiducial in the deformed US scans.
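  • A possible concrete form of such a loss (shapes, weighting, and sampling strategy are assumptions made for this sketch) combines a soft dice term with an MSE term on fiducial displacement.

```python
import torch

def registration_loss(warped_ct_seg, us_seg, ct_fid, us_fid, disp, alpha=1.0):
    """warped_ct_seg, us_seg : (B, C, D, H, W) soft/one-hot segmentations.
    ct_fid, us_fid : (B, N, 3) fiducial locations in the undeformed CT and deformed US, in voxels.
    disp           : (B, 3, D, H, W) predicted displacement field, in voxels."""
    inter = (warped_ct_seg * us_seg).sum(dim=(2, 3, 4))
    denom = warped_ct_seg.sum(dim=(2, 3, 4)) + us_seg.sum(dim=(2, 3, 4))
    dice_loss = 1.0 - ((2 * inter + 1e-5) / (denom + 1e-5)).mean()

    # Sample the displacement at the CT fiducial voxels and compare the implied
    # deformed positions with the fiducials found in the US scan.
    idx = ct_fid.round().long()
    b = torch.arange(idx.shape[0]).view(-1, 1)
    sampled = disp[b, :, idx[..., 0], idx[..., 1], idx[..., 2]]      # (B, N, 3)
    fid_mse = ((ct_fid + sampled - us_fid) ** 2).mean()

    return dice_loss + alpha * fid_mse
```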
  • the fiducial markers may be detected and matched in the CT and US scans for measurement by the loss function.
  • the NN is trained.
  • accuracy of the registration may be validated by comparing a registration score of other segmented regions.
  • the registration score may be defined as the dice similarity between labels of the same structures.
  • the fiducial markers artificially inserted into the kidney and on the boundary of a validation specimen are compared before and after registration, where the distance between them indicates the quality of the result.
  • re-registration using a new set of US images may be performed to improve accuracy and account for organ deformation. Identified landmarks from the previous stages may be used as a starting point of the re-registration process.
  • Shown in FIG. 4C is an exemplary implementation 450 of the US-CT segmentation registration described in block 106 of method 100 and in method 400.
  • a CT segmentation 452 of an undeformed organ for example an undeformed kidney, is embedded with fiducial markers numbered 1 - 9, as represented by fiducial marker no. 2 454.
  • a US segmentation 456 of the kidney following deformation also includes the fiducial markers 1 - 9, as represented by fiducial marker no. 2 458, all of which have moved relative to their position in the CT segmentation 452 due to the deformation.
  • the position change of the fiducial markers 1 - 9 is represented by the change in position of fiducial marker no. 2 458 in US segmentation 456 relative to fiducial marker no. 2 454 in CT segmentation 452.
  • the US-CT segmentation registration output 460 shows an overlay of the CT fiducials and the US fiducials. The displacement of each fiducial is more noticeable in the overlay image. In the training step 428, this distance can be used in calculating the loss.
  • FIG. 5 shows an exemplary method 500 for continuous tracking using RGB cameras when the US probe is not in contact with the kidney, described in block 108.
  • images of the renal area are acquired using the RGB stereo cameras. Image keypoints are detected by the cameras in 3D space.
  • images of the renal area are continuously acquired using the RGB stereo cameras.
  • Image keypoints are detected by the cameras in 3D space, and are continuously or repeatedly matched to the image keypoints detected at starting time (in block 502).
  • the transform resulting from matching the image keypoints is computed.
  • an optimal transform (e.g., an affine transform) is calculated that minimizes the distance between the original keypoints, after the transform is applied, and the new keypoints.
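  • One common way to compute such a transform is a closed-form least-squares (Kabsch) fit between the matched 3D keypoints; the rigid variant below is shown only as an illustration (the disclosure names an affine transform as one option).

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src keypoints onto dst keypoints.

    src, dst : (N, 3) matched 3D keypoints at the reference and current times."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```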
  • method 100 may be used to plan a RAS procedure.
  • the RAS procedure includes a partial nephrectomy.
  • the planning may involve several stages: buffer zone (tumor margin) planning, laparoscopic port entry planning, and incision line planning.
  • a buffer zone is defined around the tumor by using a fixed or user-configurable margin in physical units and dilating a tumor mask; optionally, a NN such as U-net may be used for this step.
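  • A minimal sketch of the mask-dilation approach (voxel spacing, margin value, and helper name are assumed for illustration) is shown below; the structuring element is sized so that the dilation corresponds to the requested margin in millimetres along each axis.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilate_margin(tumor_mask, spacing_mm, margin_mm=5.0):
    """Dilate a binary tumor mask by a physical margin to form the buffer zone.

    tumor_mask : (D, H, W) boolean tumor segmentation.
    spacing_mm : voxel spacing (dz, dy, dx) in millimetres.
    margin_mm  : fixed or user-configurable margin in millimetres."""
    radii = [int(np.ceil(margin_mm / s)) for s in spacing_mm]
    zz, yy, xx = np.ogrid[-radii[0]:radii[0] + 1,
                          -radii[1]:radii[1] + 1,
                          -radii[2]:radii[2] + 1]
    # Ellipsoid whose half-axes equal the margin expressed in voxels per axis.
    struct = ((zz * spacing_mm[0]) ** 2 + (yy * spacing_mm[1]) ** 2 +
              (xx * spacing_mm[2]) ** 2) <= margin_mm ** 2
    buffer_zone = binary_dilation(tumor_mask.astype(bool), structure=struct)
    return buffer_zone  # the margin shell alone would be buffer_zone & ~tumor_mask
```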
  • planning a laparoscopic port entry may be done on the 3D scan data.
  • the planning may be done manually, for example by displaying a 3D model of the kidney, tumor, and recommended margin, and of the patient’s body, and using 3D modelling software to display from the expected angle and annotate the port location.
  • the port planning may be automated by training a neural network on a database of surgeon annotated ports.
  • planning an incision line on or within the kidney for effective tumor removal may be performed on 3D scan data.
  • the planning may be done manually.
  • the planning may be done by fitting an optimal plane to sides of tumor margin.
  • the planning may be done by training a neural network based on a database of multiple planned and verified incision lines.
  • FIG. 6 shows exemplary incision lines and tumor margins 600 used in the planning stage.
  • Image 602 is a PRE-OP 3D kidney model created from CT segmentation.
  • Image 604 is an INTRA-OP US image with the plan overlaid on top of the real-time display;
  • image 606 is an INTRA-OP RGB image.
  • the incision lines and tumor margins are indicated by arrow 608 in image 602, arrow 610 in image 604, and arrow 612 in image 606, respectively.
  • FIG. 7 schematically illustrates an exemplary CMIS 700.
  • CMIS system 700 may include a 3D CT/MRI processing module 702, a 3D US processing module 704, a US-CT Registration module 706, an RGB processing module 708, an Overlay module 710, a Controller (CTRL) 712, a Neural Network (NN) 714 which may optionally be a CNN, a Memory (MEM) 716, and a Library (LIB) 718.
  • CMIS 700 may process INPUT DATA 720 and, by applying the teachings described in method 100 and shown in FIG. 1, may generate OUTPUT DATA 722.
  • INPUT DATA 720 may include 3D CT scans and/or MRI scans, 2D US scans, and images acquired by means of at least two stereo cameras, optionally RGB cameras.
  • OUTPUT DATA 722 may include imaging data suitable for use in RAS applications, for example, for performing partial nephrectomy RAS.
  • 3D CT/MRI processing module 702 may perform the functions described in block 102 of method 100, 3D US processing module 704 those described in block 104 of method 100, and RGB processing module 708 those functions described in block 108 of method 100. Additionally, US-CT Registration module 706 may perform those functions described in block 106 of method 100, and Overlay module 710 those described in block 110 of method 100.
  • CTRL 712 may control the operation of all components in CMIS 700.
  • MEM 716 may store software executable by CTRL 712 required to control operations of the CMIS components.
  • LIB 718 may store datasets which may be required during development, optionally during training of NN 714, and/or required by the NN in the execution of tasks associated with method 100.
  • Some stages (steps) of the aforementioned method(s) may also be implemented in a computer program for running on a computer system, at least including code portions for performing steps of the relevant method when run on a programmable apparatus, such as a computer system or enabling a programmable apparatus to perform functions of a device or system according to the disclosure.
  • Such methods may also be implemented in a computer program for running on the computer system, at least including code portions that make a computer execute the steps of a method according to the disclosure.
  • a computer program is a list of instructions such as a particular application program and/or an operating system.
  • the computer program may for instance include one or more of: a subroutine, a function, a procedure, a method, an implementation, an executable application, an applet, a servlet, a source code, code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
  • the computer program may be stored internally on a non-transitory computer readable medium. All or some of the computer program may be provided on computer readable media permanently, removably or remotely coupled to an information processing system.
  • the computer readable media may include, for example and without limitation, any number of the following: magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g., CD-ROM, CD-R, etc.) and digital video disk storage media; nonvolatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; MRAM; volatile storage media including registers, buffers or caches, main memory, RAM, etc.
  • a computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process.
  • An operating system is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources.
  • An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system.
  • the computer system may for instance include at least one processing unit, associated memory and a number of input/output (I/O) devices.
  • I/O input/output
  • the computer system processes information according to the computer program and produces resultant output information via I/O devices.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Vascular Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A method is disclosed including, during a partial nephrectomy surgical procedure, tracking an ultrasound (US) probe; generating a three-dimensional (3D) representation of multiple US images of a kidney and/or a tumor with known orientation relative to the tracked laparoscopic US probe; overlaying in real time the generated 3D representation of multiple US images with intraoperative data to obtain overlaid data; and displaying the overlaid data on the intraoperative camera data.
PCT/IB2022/055511 2021-09-01 2022-06-14 Combined multiple imaging modalities in surgical procedures WO2023031688A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202163239532P 2021-09-01 2021-09-01
US63/239,532 2021-09-01
US202163254728P 2021-10-12 2021-10-12
US63/254,728 2021-10-12

Publications (1)

Publication Number Publication Date
WO2023031688A1 true WO2023031688A1 (fr) 2023-03-09

Family

ID=85412009

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2022/055511 WO2023031688A1 (fr) 2021-09-01 2022-06-14 Combined multiple imaging modalities in surgical procedures

Country Status (1)

Country Link
WO (1) WO2023031688A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130218024A1 (en) * 2011-10-09 2013-08-22 Clear Guide Medical, Llc Interventional In-Situ Image-Guidance by Fusing Ultrasound and Video
US20130338505A1 (en) * 2012-06-15 2013-12-19 Caitlin Marie Schneider Ultrasound Probe for Laparoscopy
US20150031990A1 (en) * 2012-03-09 2015-01-29 The Johns Hopkins University Photoacoustic tracking and registration in interventional ultrasound
US20170265946A1 (en) * 2013-12-17 2017-09-21 Koninklijke Philips N.V. Shape sensed robotic ultrasound for minimally invasive interventions

Similar Documents

Publication Publication Date Title
US9652845B2 (en) Surgical assistance planning method using lung motion analysis
JP2023175709A (ja) Registration of a spatial tracking system with an augmented reality display
US10716457B2 (en) Method and system for calculating resected tissue volume from 2D/2.5D intraoperative image data
US20180158201A1 (en) Apparatus and method for registering pre-operative image data with intra-operative laparoscopic ultrasound images
US8942455B2 (en) 2D/3D image registration method
CN107067398B (zh) Method and device for completing missing blood vessels in a three-dimensional medical model
Reinertsen et al. Clinical validation of vessel-based registration for correction of brain-shift
Song et al. Locally rigid, vessel-based registration for laparoscopic liver surgery
KR20210051141A (ko) Method, apparatus, and computer program for providing augmented-reality-based medical information of a patient
EP3788596B1 (fr) Lower to higher resolution image fusion
JP6644795B2 (ja) Ultrasound imaging apparatus and method for segmenting anatomical objects
Nosrati et al. Simultaneous multi-structure segmentation and 3D nonrigid pose estimation in image-guided robotic surgery
KR20180116090A (ko) Medical navigation system and method
US11282211B2 (en) Medical imaging device, method for supporting medical personnel, computer program product, and computer-readable storage medium
KR20210052270A (ko) Method, apparatus, and computer program for providing augmented-reality-based medical information of a patient
CN112043378A (zh) Method and system for providing a person with navigation support for navigating with respect to a resection part
Plantefeve et al. Automatic alignment of pre and intraoperative data using anatomical landmarks for augmented laparoscopic liver surgery
CN117100393A (zh) Method, system and device for target localization in video-assisted surgery
WO2023031688A1 (fr) Combined multiple imaging modalities in surgical procedures
Vagvolgyi et al. Video to CT registration for image overlay on solid organs
KR20200048746A (ko) Apparatus and method for displaying cerebrovascular comparison reading images
JP7269538B2 (ja) Registration system for liver surgery
Liu et al. CT-ultrasound registration for electromagnetic navigation of cardiac intervention
Favretto et al. A fast and automatic method for 3D rigid registration of MR images of the human brain
Drechsler et al. Simulation of portal vein clamping and the impact of safety margins for liver resection planning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22863715

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE