CN112741689B - Method and system for realizing navigation by using optical scanning component - Google Patents


Info

Publication number
CN112741689B
CN112741689B (application CN202011514169.0A)
Authority
CN
China
Prior art keywords
information
instrument
target tissue
light
channel
Prior art date
Legal status
Active
Application number
CN202011514169.0A
Other languages
Chinese (zh)
Other versions
CN112741689A (en)
Inventor
王少白
侯尧
周武建
朱峰
边智琦
张凯
李军军
Current Assignee
Shanghai Zhuoxin Medical Technology Co ltd
Original Assignee
Shanghai Zhuoxin Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Zhuoxin Medical Technology Co ltd
Priority to CN202011514169.0A
Publication of CN112741689A
Application granted
Publication of CN112741689B
Legal status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/30: Surgical robots
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/105: Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107: Visualisation of planned trajectories or target regions
    • A61B 2034/108: Computer aided selection or customisation of medical implants or cutting guides
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2055: Optical tracking systems
    • A61B 2034/2063: Acoustic tracking systems, e.g. using ultrasound
    • A61B 2034/2065: Tracking using image or pattern recognition

Abstract

A method and system for implementing navigation using an optical scanning component. The device has a flexible body and at least one light scanning component, comprising an emitting illumination component and a light receiving component disposed along the length of the flexible body; the light scanning component includes a structured light/ToF laser scanning component. Structured light or a ToF laser is actively projected onto the anatomical structure by the emitting illumination component, and the returned light is processed to create a computer model of the anatomical structure. A path to a target tissue location in the anatomical structure is planned, and key nodes (branch intersections and the target tissue location) are determined together with their matching three-dimensional information. When the instrument must be navigated to the target tissue location, the key position at which the instrument reaches each key node is preliminarily determined from the length of flexible body inserted into the channel, and the channel the instrument should take at the current branch intersection is obtained by matching, guiding the instrument along the channel. The invention compares information at the key nodes only a limited number of times, completing navigation more directly and more effectively.

Description

Method and system for realizing navigation by using optical scanning component
Technical Field
The invention relates to a virtual navigation method and system for navigating a device to a target tissue location, in particular to the technical field of instrument navigation during the treatment of lung disease, and especially to instrument navigation inside a patient's lung.
Background
Image-guided surgery assists a surgeon in maneuvering a medical instrument to a target tissue location within a patient so that a therapeutic and/or diagnostic medical procedure can be performed on the target. For guidance, the pose (i.e., position and orientation) of the working end of the medical instrument can be tracked, and an image can be displayed along with, or superimposed on, a model of the anatomical structure associated with the target. For ease of description, the lung is used as the example anatomy, illustrating how the prior art navigates an instrument through the fine airway passages of the lung to a target tissue location (e.g., a biopsy or treatment site).
Lung cancer has a very high mortality rate, especially when it is not diagnosed in its early stages. National lung cancer screening trials have shown that mortality can be reduced if a population at risk for the disease is screened early with diagnostic scans such as computed tomography (CT). Although CT scanning improves the ability to detect small lesions and nodules in the lung, these lesions and nodules still require biopsy and cytological examination before a diagnosis can be made and treatment performed. To perform a biopsy and administer treatment, a tool must be navigated within the lung to the biopsy or treatment site. Accordingly, improvements to navigation systems and navigation methods are constantly sought.
Medical personnel, such as a physician, can use a navigation system to perform tasks such as planning a path to a target tissue location, navigating a medical instrument to that location, and navigating a variety of tools, such as a locatable guide (LG) and/or a biopsy tool, to it. An electromagnetic navigation (ENB) procedure generally involves at least two phases: (1) planning a path to a target located within or adjacent to the patient's lung; and (2) navigating a probe to the target along the planned path. These phases are commonly referred to as (1) "planning" and (2) "navigation".
Prior to the planning phase, the patient's lungs are imaged by, for example, a computed tomography (CT) scan, although other applicable imaging methods will be appreciated by those skilled in the art. The image data collected during the CT scan may be stored in, for example, the Digital Imaging and Communications in Medicine (DICOM) format, although other applicable formats will likewise be appreciated. The CT scan image data may then be loaded into a planning software application ("app") for use in the planning phase of the ENB procedure. The application may generate a three-dimensional (3D) model of the patient's lungs from the CT scan image data. The 3D model may include a model airway tree that corresponds to the real airways of the patient's lungs and shows the different channels, branches, and bifurcations of the patient's actual airway tree. Additionally, the 3D model may include 3D renderings of lesions, markers, vessels, and/or pleura. While the CT scan image data may contain gaps, omissions, and/or other defects, the 3D model is a smooth representation of the patient's airways in which these gaps, omissions, and/or defects are filled in or corrected.
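As a rough illustration of how an airway-tree model can be seeded from such CT data: air-filled voxels in a Hounsfield-unit volume sit near -1000 HU, far below soft tissue, so a simple threshold yields a starting mask. This is only a sketch under that assumption; the function name and threshold are illustrative, and real planning software follows this with region growing, morphology, and surface smoothing.

```python
def airway_seed_mask(ct_hu_slice, air_threshold=-900.0):
    """Crude first step of airway-tree extraction from one CT slice given
    in Hounsfield units: mark pixels whose value is close to air (about
    -1000 HU). Later pipeline stages fill the gaps and defects mentioned
    above; this function only produces the seed mask."""
    return [[hu < air_threshold for hu in row] for row in ct_hu_slice]
```

Applied to a full volume slice by slice, the connected component containing the trachea becomes the seed for the model airway tree.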
The 3D model is registered with the patient's real lungs before the navigation phase of the ENB procedure begins. One possible registration method involves navigating a locatable guide into each lobe of the patient's lung until at least the second bifurcation of that lobe's airways is reached.
Patent No. 201280034693.5 discloses a method for registering a computer model of an anatomical structure with a medical instrument, i.e., how to register a 3D model with the patient's real lung for guidance to a lung biopsy or treatment site. The method comprises the following steps. Global registration of the computer model with the medical device is performed periodically, by determining the pose and shape of the medical device while it is disposed in a passageway (e.g., an airway) of the anatomical structure (e.g., a lung) and matching the determined shape against the shapes of one or more potential passageways in the computer model to find the most suitable one. Local registration of the computer model with the medical device is then performed by comparing an image captured by an image capture device with a plurality of virtual views of the computer model, where the virtual views are generated from the perspective of a virtual camera whose pose is initially set at the distal end of the medical device and then perturbed around that initial pose.
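The global-registration step of that cited method, matching the measured instrument shape to the "most suitable" candidate passageway, can be sketched as a nearest-shape search. The names and the RMSD criterion below are illustrative assumptions; the cited patent does not specify its metric.

```python
import math

def shape_rmsd(a, b):
    """Root-mean-square distance between two equally sampled 3-D polylines."""
    assert len(a) == len(b)
    total = sum((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2
                for p, q in zip(a, b))
    return math.sqrt(total / len(a))

def most_suitable_passage(measured_shape, candidate_passages):
    """Pick the model passage whose sampled centerline best fits the measured
    device shape; candidate_passages maps passage id -> centerline points."""
    return min(candidate_passages,
               key=lambda pid: shape_rmsd(measured_shape, candidate_passages[pid]))
```

In practice the measured shape would come from a shape sensor along the device, and the candidate centerlines from the airway-tree model.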
As for existing time-coded structured light, it can provide quite fine stereoscopic scanning results. In this scanning mode, structured light patterns with different phase displacements and spatial frequencies are projected onto the surface of an object, and an image acquisition device then captures multiple images of the structured light as deformed by the object's surface profile, so that the complete surface information of the object can be obtained through image analysis. In application 201910384397.1, Suzhou University discloses a method and system for liver surgery navigation based on structured light scanning, wherein the method comprises the following steps:
reconstructing a three-dimensional image of the patient's liver surface preoperatively from the CT image, locating the lesion point, and planning the surgical path;
projecting coded structured light onto the patient's liver surface intraoperatively, scanning the liver surface in real time while acquiring the scan information, reconstructing the liver surface in real time, and displaying the reconstructed three-dimensional image on a 3D display;
registering the preoperatively CT-reconstructed three-dimensional image with the intraoperatively real-time-reconstructed one to find the accurate position of the lesion point; outputting registration parameters from the intraoperative real-time registration, correcting the preoperative surgical path planning in real time, and displaying the corrected path on the 3D display; and acquiring the positions of the surgical instrument and the patient's liver in real time during the operation, so that the position of the surgical instrument is corrected in real time.
The method has the advantages of strong stability, high accuracy and real-time performance.
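The phase-displacement decoding underlying such time-coded structured light can be illustrated with the classic four-step phase-shifting algorithm. This is a sketch assuming patterns shifted by 0, π/2, π, 3π/2; the cited system does not disclose its exact coding.

```python
import math

def wrapped_phase(i1: float, i2: float, i3: float, i4: float) -> float:
    """Recover the wrapped phase at one pixel from four intensity samples
    I_k = A + B*cos(phi + k*pi/2), k = 0..3. The phase deformation across
    the image encodes the surface profile (depth) of the scanned object:
    i4 - i2 = 2B*sin(phi) and i1 - i3 = 2B*cos(phi), so atan2 recovers phi
    regardless of the ambient term A and modulation amplitude B."""
    return math.atan2(i4 - i2, i1 - i3)
```

A full scanner then unwraps the phase across the image and triangulates it against the projector geometry to obtain depth.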
That method, however, projects coded structured light onto the surface of the patient's liver: the structured-light stereo scanner does not, in principle, move during the whole procedure, and there is no constraint on its size. When a structured light scanning device must instead be applied inside the human body, for example deep in the airways of the lung, the permissible size of the scanning structure is severely limited, and the entire scanner must move through the airway throughout navigation. Whether accurate navigation remains possible under these conditions becomes the critical question.
Therefore, the existing systems and methods described above have the following problems:
First, the liver surgery navigation method based on structured light scanning cannot be applied directly to cases, such as the lung, where the device must go deep into the human body and move dynamically.
Second, the cited patent's system for registering a computer model of an anatomical structure with a medical device requires additional hardware, such as an image capture device, plus software developed specifically for registration; the development cost is high, the development period is long, and the extra cost falls on medical institutions such as hospitals. Most importantly, throughout the registration process images are acquired continuously and then registered in software: the computation load is large, the navigation speed is slow, and the intraoperative workflow is easily disrupted.
Third, the branches along the channel become thinner and thinner, which limits what an image capture device can see. The image capture device may be a stereoscopic or monoscopic camera arranged at the distal end, and the smallest current miniature camera may be 3 cm in size. The left and right bronchi divide into second-order bronchi at the pulmonary hilum, and the region governed by a second-order bronchus and its branches constitutes a lung lobe; each second-order bronchus divides into third-order bronchi, and the region governed by each third-order bronchus and its branches constitutes a lung segment. The bronchi branch repeatedly in the lung, up to 23-25 generations, finally forming alveoli, with the smallest bronchial diameters reaching the millimeter level. In other words, using only an image capture device, navigation may fail to reach the target tissue location precisely, leaving a residual distance between the navigated position and the target; that is, the navigation is not accurate.
In addition, the cited patent only provides registration between a computer model of an anatomical structure and a medical instrument: navigation proceeds by acquiring the current pose of the instrument in real time and comparing it against the computer model. The accuracy is insufficient, and a large number of calibration computations for correction and adjustment must be performed; the computation load is heavy, much of it wasted, and it consumes a great deal of navigation time. Moreover, the medical instrument carries a flexible body with a distal sensor, and many instruments also carry acquisition devices, while the access channel grows ever narrower, so accommodating these components in the narrow channel becomes a further problem.
Disclosure of Invention
The first objective of the present invention is to provide a fast navigation method for navigating a device to a target tissue location, so as to solve the prior-art problems of high cost, heavy computation, slow navigation, and easy disruption of the intraoperative process.
The second objective of the present invention is to provide a fast navigation system for navigating a device to a target tissue location, addressing the same prior-art problems.
A method for realizing navigation with a light scanning component, for rapidly navigating a device to a target tissue location using a structured light/ToF laser scanning component, comprising:
s10: the device has a flexible body and at least one light scanning component including an emitting illumination component and a light receiving component disposed along a length of the flexible body, the light scanning component including a structured light/ToF laser scanning component;
s20: actively projecting structural light/ToF laser to the anatomical structure by using an emission lighting assembly, wherein the structural light is received by the photosensitive receiving assembly after being reflected on the spatial surface of the anatomical structure to calculate the deformation of the structural light so as to determine the depth of the anatomical structure, or the ToF laser calculates the time difference or the phase difference from emission to reflection so as to form distance depth data, so that a computer model of the anatomical structure is built;
s30: planning a path leading to a target tissue position in an anatomical structure, and determining key nodes and corresponding three-dimensional information including a branch intersection and the target tissue position;
s40: positioning an instrument in a channel of the anatomy when the instrument requires navigation to a target tissue location;
s50: preliminarily determining the key position of the instrument relative to each key node reached by the instrument according to the length information of the flexible main body of the instrument extending into the channel, acquiring the three-dimensional information of the current key position acquired by the end part of the flexible main body in real time, extracting the feature information of the three-dimensional information of the key position, registering the feature information and the virtual information corresponding to the computer model, and matching the channel information of the instrument needing to be routed at the current branch intersection to guide the instrument to walk through the channel.
In the present invention, S10 further includes disposing the emitting illumination component and the light receiving component on the flexible body along its length. An opening at the end of the flexible body serves as a projection window through which the emitting illumination component projects the ToF laser or laser pattern onto the anatomical surface; a receiving window is disposed at a predetermined distance from the projection window, and the light receiving component is disposed at the position corresponding to the receiving window.
In the present invention, S10 may further include at least two flexible bodies. The end of each flexible body carries its own emitting illumination component and light receiving component, with a corresponding projection window and receiving window: the projection window is positioned to match the emitting illumination component, and the receiving window to match the light receiving component.
The present invention may further comprise:
s60: and performing ultrasonic detection through an ultrasonic probe arranged at the end part of the flexible main body, and matching to obtain the position relation information of the instrument and the target tissue position or matching to obtain a channel to be routed.
Determining the key position associated with the instrument's arrival at each key node further comprises:
when the planned path to the target tissue location in the anatomical structure is obtained, numbering each bifurcation of the planned path in advance according to a numbering rule;
each time a key position associated with a key node is determined, judging from its number whether the key node is the target tissue location; if so, matching to obtain the positional relationship between the instrument and the target tissue location; otherwise, matching to obtain the channel the instrument should take at the current branch intersection and numbering the next key node according to the numbering rule.
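A hypothetical instance of this numbering rule and the per-node decision: the patent leaves the rule open, and sequential numbering along the planned path is one simple choice.

```python
def number_key_nodes(planned_path_nodes):
    """Assign sequential numbers to the bifurcations and the target along
    the planned path, in traversal order (one possible numbering rule)."""
    return {node: i + 1 for i, node in enumerate(planned_path_nodes)}

def decide_at_node(node, target_node, route_table):
    """Decision made at every key node: if it is the target, report that
    the instrument/target positional relation should now be matched;
    otherwise look up which branch channel to take next."""
    if node == target_node:
        return ("at_target", None)
    return ("route", route_table[node])
```

The route table itself comes from the planning phase, where each bifurcation on the path is paired with the branch leading onward to the target.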
In the present invention, the step S30 of determining the three-dimensional information corresponding to the key node further includes:
the method comprises the steps of establishing a virtual three-dimensional information matching library of a bifurcation related to key nodes in advance, wherein a plurality of three-dimensional information including at least one of three-dimensional point cloud information and three-dimensional patch information is stored in the virtual three-dimensional information matching library of the bifurcation in advance, and the virtual three-dimensional information is acquired at a preset interval or at a core position within a preset length in front of the bifurcation in a passage.
In the present invention, registering the feature information against the corresponding three-dimensional information of the computer model in step S50 further includes:
the flexible main body end light receiving part acquires three-dimensional information of the current key position and judges the number of current branch channels in advance;
if the number of the current bifurcate channels is larger than two channels, extracting three-dimensional information of each channel, and registering to obtain a channel to be routed;
and if the number of the current bifurcation channels is two, registering to obtain the channel to be routed according to the extracted three-dimensional information and the pre-stored three-dimensional information corresponding to the computer model.
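The channel-matching step above can be illustrated by scoring each candidate branch's extracted feature vector against the pre-stored model features. Cosine similarity and the feature layout are illustrative assumptions; the patent does not fix a descriptor.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def channel_to_route(observed, model_channels):
    """Register the observed branch features against each pre-stored model
    channel and return the id of the best-scoring channel to take."""
    return max(model_channels, key=lambda cid: cosine(observed, model_channels[cid]))
```

Here `observed` would be derived from the point cloud or patch data scanned at the bifurcation, and `model_channels` from the virtual matching library built in advance.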
In the present invention, step S60 further includes:
when the width of the channel falls below a threshold, or the target tissue location does not lie on the channel, ultrasonic detection by the ultrasonic probe is started, and matching yields the positional relationship between the instrument and the target tissue location, or the channel to take; this further comprises:
starting an ultrasonic probe to obtain an annular scanning ultrasonic image and extracting shape parameter information;
and matching the shape parameter information with the image information of the corresponding position of the computer model to obtain the position relation information of the instrument and the target tissue position or matching to obtain a channel to be routed.
The present invention may further comprise:
the device is further provided with an instrument operation part of the surgical robot;
and when the positional relationship between the instrument and the target tissue location has been obtained, navigating the instrument operation component to the relevant position so as to perform an instrument operation comprising one of biopsy sampling, puncture, multi-energy ablation, and resection.
A rapid navigation system that enables navigation of a device to a target tissue location, comprising:
a storage device: pre-storing computer model information of an anatomical structure of a patient, storing a planned path leading to a target tissue position in the anatomical structure, and determining and storing key nodes and corresponding image information including a branch intersection and the target tissue position;
a medical instrument having a flexible body and a light scanning component comprising at least an emitting illumination component and a light receiving component distributed along the length of the flexible body, the light scanning component comprising a structured light/ToF laser scanning component for actively projecting structured light or a ToF laser onto the anatomical structure via the emitting illumination component; the structured light, after reflecting off the spatial surface of the anatomical structure, is received by the light receiving component and its deformation is calculated to determine the depth of the anatomical structure, or the ToF laser's time difference or phase difference from emission to reflection is calculated to form range-depth data, thereby creating a computer model of the anatomical structure;
the processing device at least comprises an optical processor and a navigation processor, wherein the optical processor acquires image information of the current key position acquired by an optical scanning component at the end part of the flexible main body in real time under the triggering of the system processor, and extracts the characteristic information of the image of the key position; the system processor preliminarily determines the key position of the instrument relative to each key node by the length information of the flexible main body of the instrument extending into the channel, and matches the channel information of the instrument needing to be routed at the current branch intersection by the three-dimensional information of the key position of the optical processor and the three-dimensional information corresponding to the computer model so as to guide the instrument to go through the channel.
The present invention may further comprise:
distributing at least one ultrasonic probe along the length of the flexible body;
the processing device further comprises an ultrasonic processor, and the ultrasonic processor is matched to obtain the position relation information of the instrument and the target tissue position or is matched to obtain a channel to be routed.
The present invention may further comprise:
and instrument operation parts of a surgical robot are distributed along the length of the flexible main body and used for navigating the instrument operation parts of the instrument to move to relevant positions after the position relation information of the instrument and the target tissue position is obtained so as to perform instrument operation including one of biopsy taking, puncture, multiple energy ablation, ablation and resection.
When path planning is performed on the established computer model, the key nodes, mainly the branch intersections and the target tissue location, are confirmed in advance. During intraoperative navigation, only the three-dimensional information of the relevant parts or key positions of the key nodes is compared: at a branch intersection, to confirm the channel the instrument should take; at the precise position of the target tissue, chiefly to obtain the positional relationship between the instrument's current position and the target tissue. In other words, the three-dimensional information of the key positions is compared only a limited number of times, completing navigation more directly and more effectively, with high precision and a fast comparison speed.
Drawings
FIG. 1A is a schematic diagram of a rapid navigation system implementing the present invention for navigating a device to a target tissue location; FIG. 1B is a schematic diagram of an example of a light scanning component of the present invention;
FIG. 2 is a modeled graph of a lung;
FIG. 3 is a flow chart of a method for implementing rapid navigation of a device to a target tissue location;
FIG. 4 is an exemplary illustration of a lung requiring rapid navigation;
FIG. 5 is a planning diagram corresponding to FIG. 4.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
The working-end position and orientation of the medical instrument may be tracked, and an image may be displayed along with, or superimposed on, the model of the anatomy associated with the target. The model may be computer-generated from pre-operative and/or intra-operative patient anatomical scan data, such as X-ray, ultrasound, fluoroscopy, computed tomography (CT), magnetic resonance imaging (MRI), and other imaging techniques. Displaying an image of the target on which a therapeutic and/or diagnostic medical procedure is to be performed, the model of the anatomical structure in which the target resides or which it adjoins, and the working end of the medical instrument superimposed on the anatomical model is particularly useful to the surgeon, providing assistance in guiding the medical instrument through natural and/or artificial body passageways to the target tissue location. However, when the anatomy is neither fixed nor rigid, but moves and/or changes shape with periodic or aperiodic motion, as with a patient's breathing lungs or beating heart, properly registering the model with the medical instrument can be very difficult. For this reason, the present invention achieves rapid intraoperative navigation of the device to the target tissue location, a capability that must be attained.
Image-guided surgery helps a surgeon manipulate a medical instrument intraoperatively to a target within a patient so that a therapeutic and/or diagnostic medical procedure may be performed on the target. The image in image guidance is an image in the broad sense; that is, it may be three-dimensional image information. With the further development of robotic surgery systems, their application range has become wide and their clinical use extensive: the surgeon may operate the machine away from the table to perform the procedure. Therefore, guiding the medical instrument to the target tissue location not only allows medical staff to operate the relevant instrument for therapeutic and/or diagnostic procedures including biopsy, puncture, multi-energy ablation, and resection, but also allows a robotic surgical instrument to be arranged there directly and guided to the corresponding position to complete the treatment. The medical device may be not only an endoscope, a catheter, or a medical instrument with a steerable tip and a flexible body capable of conforming to a body passageway leading to a target in the patient's anatomy, but may also include a locatable guide (LG), a biopsy tool, a robotic surgical device or sub-device, and the like. The target tissue location may be a biopsy or treatment site, and navigating the device to the target tissue location means not only literally navigating the above medical instruments to the relevant biopsy or treatment site but, understood broadly, navigating the medical instrument to any location where a therapeutic and/or diagnostic medical procedure is to be performed on it.
During the operation, the medical instrument can be guided and navigated to an accurate position for treatment and/or diagnosis only if the specific positional relationship between the current medical instrument and the target tissue is accurately known.
The core innovations of the invention are as follows:
First, in surgery, time is life. When path planning is performed on the established computer model, key nodes are confirmed in advance; the key node positions mainly comprise bifurcation intersections and the target tissue. During intraoperative navigation, image-information comparison (comprising comparison of two-dimensional image feature information and comparison of three-dimensional images) is carried out only for the relevant parts of the key nodes: information comparison at a bifurcation intersection confirms the channel of the instrument route, while the positional relationship between the current instrument position and the target tissue is known mainly from the accurate position of the target tissue. In other words, the inventors compare the two-dimensional or three-dimensional information of the key-node positions a limited number of times, so that navigation is completed more directly and effectively; the number of comparisons is limited, the precision is high, and the comparison speed is fast.
Second, the end of the flexible main body is provided with an optical scanning component, an endoscope lens, or an ultrasonic probe, and the real-time image data of the endoscope lens or the optical scanning component, or the ultrasonic information of the ultrasonic probe, is matched in real time against the earlier planning for positioning. The invention can perform secondary development upon existing mature information-acquisition and image-processing technology; only the computer model need be matched against the real-time post-processed data acquired by the optical scanning component, the endoscope lens or the ultrasonic probe. This not only reduces the overall development cost and the time required for development but, more importantly, reduces the difficulty of the whole development and improves the overall matching precision; for medical institutions in the field, the overall purchase cost can also be reduced.
With the vigorous development of structured light and ToF laser technology, a rather fine stereoscopic scanning structure can be provided, with the relevant instruments becoming ever smaller. The surgical navigation of the present invention is generally divided into three phases: 1) establishing a computer model; 2) navigating the medical instrument to the position required for performing a therapeutic and/or diagnostic medical procedure, with image capture and comparison at the relevant parts of the key nodes involved, in order to confirm the channel of the instrument route; 3) when the narrowness of the channel is less than a threshold value or the target tissue position is not in the channel, matching to obtain the positional relationship between the instrument and the target tissue position, or matching to obtain the channel to be routed. The invention may use structured light/ToF laser technology to create the computer model in the first phase, or in both the first and second phases. As structured light/ToF laser technology develops toward ever smaller dimensions, it may also implement the third-phase function. Of course, an endoscope and ultrasound may be used in the present invention as alternatives to achieve the above-described functions.
First embodiment
In this example, structured light/ToF laser technology is used to implement image acquisition and comparison between the computer model and the relevant parts of the key nodes, so as to confirm the channel of the instrument route. When the narrowness of the channel is less than a threshold value or the target tissue position is not in the channel, the positional relationship between the instrument and the target tissue position, or the channel to be routed, is obtained by ultrasonic matching.
Referring to FIG. 1A, a lung is taken as an example to illustrate a rapid navigation system for navigating a device to a target tissue location according to an embodiment of the present invention. A medical rapid navigation system 100 includes a steerable medical instrument 110, one or more fiber optic leads 120 inserted into the medical instrument 110, a light processor 130, an ultrasound processor 140, a light scanning component 141, an ultrasound probe 142, an actuator 143 for performing therapeutic and/or diagnostic medical procedures, a display processor 150, a navigation processor 160, and a memory 161.
Although shown as separate units, the light processor 130, the ultrasound processor 140, the display processor 150, and the navigation processor 160 may each be implemented as hardware, firmware, software, or a combination thereof, which interact with or are otherwise executed by one or more computer processors.
The basic principle of structured light technology is that light with certain structural characteristics is projected onto the photographed object by an emitting illumination assembly (such as an infrared laser) and then collected by a dedicated camera (i.e., the light receiving assembly). Such structured light acquires different image phase information depending on the depth of different areas of the photographed object, and an arithmetic unit (i.e., the light processor) converts this change in structure into depth information to obtain the three-dimensional structure. In short, the three-dimensional structure of the photographed object is acquired by optical means, and the acquired information is put to deeper use.
ToF is one of the schemes for 3D depth cameras and a close relative of structured light. ToF ranging comes in two types, single-point and multi-point; multi-point ranging is generally used in the present invention. The principle of multi-point ranging is similar to that of single-point pulse ranging, but its light-receiving part is a CCD, i.e., a charge-holding photodiode array with an integrating response to light. The basic principle is that a laser source emits laser light over a certain viewing angle for a duration dt (from t1 to t2); each pixel of the CCD uses two synchronized trigger switches S1 (t1 to t2) and S2 (t2 to t2 + dt) to control the period during which its charge-holding element accumulates the intensity of the reflected light, yielding responses C1 and C2. The distance of the object from each pixel is then L = 0.5 · c · dt · C2/(C1 + C2), where c is the speed of light (this formula removes the effect of differing reflector reflectivity on the distance measurement). In short, processed light is emitted, reflected back after hitting the measured object, and the round-trip time is captured; because the speed of light and the wavelength of the modulated light are known, the distance to the object can be calculated rapidly and accurately, and the depth map of the measured object is thereby obtained.
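As a worked illustration, the two-gate ranging formula above can be sketched in a few lines. This is a minimal sketch; the function name and example charge values are assumptions for illustration, not part of the original disclosure.

```python
# Two-gate (indirect) ToF distance, per pixel of the CCD array.
# C1 and C2 are the charges integrated by the two synchronized shutters
# S1 (during the pulse, t1..t2) and S2 (immediately after it, t2..t2+dt);
# dt is the pulse duration. The ratio C2/(C1+C2) cancels the target's
# reflectivity, as noted in the text.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(c1: float, c2: float, dt: float) -> float:
    """Distance in metres: L = 0.5 * c * dt * C2 / (C1 + C2)."""
    if c1 + c2 == 0:
        raise ValueError("no reflected light captured")
    return 0.5 * C * dt * c2 / (c1 + c2)
```

With equal charges in both gates the echo arrives exactly one pulse-width late, so a 20 ns pulse gives roughly 1.5 m.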
The scanning component 141 is disposed directly on the distal end 111 of the flexible main body 114 of the medical device 110. An existing image processing and transmitting device can be used directly as the light processor 130, or its core processing part can be integrated into a new processor, as long as the function of the image processing and transmitting device is realized.
Since the smaller the size of the flexible body 114 the better, the end 117 of the distal end 111 of the flexible body 114 is formed as a receiving space in which the emitting illumination assembly and the light receiving assembly are disposed.
In one example, the end portion 117 is provided with two openings serving respectively as a projection window and a receiving window: the emitting illumination assembly emits the ToF laser/laser pattern through the projection window onto the surface of the anatomical space, the receiving window is provided at a predetermined distance from the projection window, and the light receiving assembly is disposed at a position corresponding to the receiving window.
Referring to fig. 2, a reference example of the emitting illumination assembly and the light receiving assembly is given below by way of example only. It may include a projector 101 (an example of the above-described emitting illumination assembly), a camera 102 (an example of the light receiving assembly), and a controller 103 (either the above-mentioned light processor 130, or a controller 103 separately disposed within the end portion 117 to process data faster). In operation, the controller 103 sends a reference light pattern 104 to the projector 101, and the projector 101 projects the reference light pattern 104 onto a scene represented by line 105. The camera 102 captures the scene with the projected reference light pattern 104 as an image 106. The image 106 is transmitted to the controller 103, and the controller 103 generates a depth map 107 based on the parallax of the reference light pattern captured in the image 106 relative to the reference light pattern 104. The depth map 107 includes predicted depth information corresponding to patches of the image 106 (i.e., the image data of the pre-acquisition locations mentioned above). In one embodiment, the controller 103 may control the projector 101 and the camera 102 to be synchronized in an epipolar manner. In addition, the projector 101 and camera 102 may form a scanning projector/camera system that illuminates the scene 105 line by line in an epipolar manner using high-peak-power, short-duration light pulses. The controller 103 may be a microprocessor or personal computer programmed via software instructions, an application-specific integrated circuit, or a combination of both.
In one embodiment, the processing provided by the controller 103 may be implemented entirely in software, in software accelerated by a graphics processing unit (GPU) or multi-core system, or in dedicated hardware capable of performing the processing operations. Both hardware and software configurations may execute different phases in parallel.
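The disparity-to-depth step performed by the controller 103 can be sketched as a simple triangulation. This is a hedged sketch under a pinhole-camera assumption: the focal length f (in pixels) and the baseline B (in metres) between projector 101 and camera 102 are assumed calibration values, not figures from the disclosure.

```python
import numpy as np

# Depth map 107 from the parallax (disparity) of the reference pattern 104
# as seen in image 106: depth = f * B / disparity (pinhole triangulation).
def depth_from_disparity(disparity_px: np.ndarray,
                         f_px: float, baseline_m: float) -> np.ndarray:
    """Per-patch depth in metres; zero disparity (no parallax) maps to +inf."""
    d = np.asarray(disparity_px, dtype=float)
    # Guard against division by zero; where() selects inf for d <= 0 anyway.
    return np.where(d > 0, f_px * baseline_m / np.maximum(d, 1e-12), np.inf)
```

For example, with f = 500 px and B = 1 cm, a patch shifted by 10 px lies at 0.5 m; patches with no measurable shift are reported as infinitely far.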
When both the emitting illumination assembly and the light receiving assembly are disposed at the end 117 of the distal end 111 of the flexible body 114, two openings of a certain size must be provided in the end 117.
For this reason, the invention may instead arrange the emitting illumination assembly and the light receiving assembly along the length of the flexible body; that is, the receiving window and the projection window corresponding to them are opened on the side surface of the flexible body, for example with the projection window on the side surface near the end position and the receiving window on the side surface at the preset distance D from the projection window. With this arrangement, a smaller end of the flexible body can be achieved. During reception, only the distance D between the receiving window and the projection window needs to be added into the receiving algorithm. That is, the method further comprises arranging the emitting illumination assembly and the light receiving assembly on one flexible body along its length, providing an opening at the end of the flexible body as a projection window through which the emitting illumination assembly emits the ToF laser/laser pattern onto the surface of the anatomical-structure space, providing a receiving window at a position a preset distance from the projection window, and disposing the light receiving assembly at a position corresponding to the receiving window.
The invention may also adopt the following scheme: at least two flexible main bodies are included, the ends of the flexible main bodies being provided with the emitting illumination assembly and the light receiving assembly respectively, with a projection window and a receiving window arranged correspondingly, the projection window at a position corresponding to the emitting illumination assembly and the receiving window corresponding to the light receiving assembly. In this way a relatively small end size can also be obtained, enabling access to smaller airways.
Ultrasonic examination is an examination method that diagnoses disease by displaying and recording waveforms, curves or images, exploiting the physical properties of ultrasound and the differing acoustic properties of the organs and tissues of the human body. There are many types of ultrasound devices. An ultrasonic diagnostic apparatus mainly sends a beam of ultrasound into a predicted part of the human body via an ultrasonic probe, scanning in linear, fan or other patterns; when the beam meets an interface between two tissues of different acoustic impedance, the ultrasound is reflected back, and after reception by the probe, signal amplification and information processing allow it to be displayed on a screen as a tomogram of the human body, called an acoustic image or ultrasonic image, for clinical diagnosis. A series of consecutive acoustic images displayed on the screen allows dynamic organ activity to be observed. Because organ-tissue interfaces in vivo differ in depth, their echoes are received at different times, so the depth of an interface, of an organ surface and of its back surface can be measured. An ultrasound examination apparatus generally includes an ultrasonic probe and an ultrasound processor. Existing small ultrasonic probes can be very small, several millimetres or even less. The present invention may provide an ultrasonic probe on the distal end 111 of the flexible body 114 of the medical instrument 110; the ultrasound processor may reuse an existing ultrasound image-processing portion, integrate that portion separately into a new processor, or integrate it into the light processor 130 or other processing device.
The display processor 150 may be coupled to a main display screen and an auxiliary display screen, preferably computer monitors capable of displaying three-dimensional images to an operator of the system 100. However, in view of cost, either or both of the main and auxiliary display screens may be a standard computer monitor capable of displaying only two-dimensional images, or the display portion of the ultrasonic examination instrument may be used directly. Likewise, the main display and the auxiliary display may be a single display.
The medical device 110 has a flexible body 114, a steerable tip 112 at its distal end 111, and a control component 116 at its proximal end 115. A control cable (not shown) or other control device typically extends from the control component 116 to the steerable tip 112 such that the tip 112 can be controllably bent or rotated, such as shown by the dashed line form of the bent tip 112. The medical instrument 110 may be an endoscope, catheter, or other medical implement having a flexible body and a steerable tip. In this example, the tip 112 may be provided with a light scanning component 141, an ultrasound probe 142, and an actuator 143 for performing therapeutic and/or diagnostic medical procedures, respectively.
The light scanning component 141, the ultrasound probe 142 and the actuator 143 may be disposed at the distal end 111 and may each be moved back and forth relative to the medical instrument 110 through a mechanical control portion.
In one embodiment, the light scanning component 141, the ultrasonic probe 142 and the actuator 143 are each connected to the proximal control component 116 via a cable, and a mechanical actuator may be disposed between the elastic fiber cable and the control component 116. The control component 116 may signal the mechanical actuator to control forward and backward extension and retraction, thereby moving at least one of the light scanning component 141, the ultrasonic probe 142 and the actuator 143 at its corresponding front end back and forth; for example, the mechanical actuator may drive the cable to project forward and retract backward. Such mechanical structures are numerous and are not described in detail here.
In addition, the light scanning component 141, the ultrasonic probe 142 and the actuator 143 may be connected to their respective mechanical actuators by their own power cables. In this structural design, the light scanning component 141, the ultrasonic probe 142 and the actuator 143 may be disposed separately at the distal end 111.
One actuator operates through the fiber cable to manipulate the tip 112, and another actuator moves the entire medical instrument 110 back and forth so that it may be inserted into and withdrawn from the patient through an access port, such as a natural body orifice or one created by the surgeon. The control component 116 is implemented as hardware, firmware or software (or a combination thereof) in one or more computer processors, or in different computer processors. In this embodiment, the flexible body 114 may be passively or actively bendable.
By way of example, in an alternative embodiment of the medical instrument 110, the handle 116 is replaced by an electromechanical interface, a controller and an input device for remotely operating the medical instrument.
As an example, the medical instrument 110 is inserted through an entry portal and extends into the anatomy of a patient. In this example, the anatomical structure is a pair of lungs having a plurality of natural body passages including trachea, bronchi and bronchioles; the inlet is the mouth of the patient; and the medical instrument 110 is a bronchoscope. Due to the properties of the lungs, the medical instrument 110 may be guided through several connected channels of the bronchial tree. In doing so, the flexible body 114 of the medical device 110 conforms to the channel through which it travels. Although a pair of lungs are shown in this example, it should be understood that aspects of the present invention are applicable and useful for other anatomical structures besides the respiratory system, such as the heart, brain, digestive system, circulatory system, and urinary system. Further, while only natural body passageways are shown, the methods described herein are also applicable to artificial or surgeon-created passageways that may be formed during or prior to a medical procedure and superimposed on a computer model of the patient's anatomy.
Application example 1
A structured light projector is disclosed in Korean patent application No. 10-2018-0092041 (published 2018.08.07) of Samsung Electronics Co. The structured light projector is very small and, after secondary modification, can be applied directly in the rapid navigation system for navigating the device to the target tissue position. Namely:
A structured light projector, comprising:
a light source configured to emit light;
a structured light pattern mask configured to receive light emitted by the light source, the structured light pattern mask comprising a first region configured to produce first structured light having a first polarization and a second region configured to produce second structured light having a second polarization different from the first polarization; and
a polarization multiplexing deflector configured to deflect the first structured light and the second structured light generated by the structured light pattern mask to different directions, respectively.
The structured light projector is mounted directly to end 117 of distal end 111 of flexible body 114 and is configured to emit first structured light SL1 and second structured light SL2 into a pulmonary anatomy OBJ (the anatomy being a pair of lungs having a plurality of natural body passageways including trachea, bronchi, and bronchioles).
The sensor may be disposed at the end of the distal end 111 of another flexible body 114, or at the side of the distal end 111 of the same flexible body 114 near the end, and is configured to receive light reflected from the object OBJ.
A light processor 130 is configured to perform the operations for obtaining shape information of the object OBJ from the light SL1r and SL2r received by the sensor. The structured light projector forms the first structured light SL1 with the first polarization and the second structured light SL2 with the second polarization from light emitted by the light source, and deflects SL1 and SL2 in different directions, so that when the light is emitted toward the object OBJ a wider field of view can be achieved.
The sensor may sense the structured light SL1r and SL2r reflected by the object OBJ. The sensor may comprise an array of light-detecting elements, and may also include a light-dispersing element for analyzing light reflected from the object OBJ according to wavelength. The light processor may obtain depth information of the anatomical structure OBJ by comparing the structured light SL1 and SL2 emitted toward the object OBJ with the structured light SL1r and SL2r reflected from it, and may analyze the 3D shape of the anatomical structure OBJ. Each structured light pattern SL1 and SL2 generated by the structured light projector may be mathematically encoded such that the angle and direction of the rays and the position coordinates of the bright and dark spots reaching a predetermined focal plane are unique. When such a pattern is reflected by an anatomical structure OBJ having a 3D shape, the pattern of each reflected structured light SL1r and SL2r may differ from that of SL1 and SL2. Depth information of the anatomical structure OBJ may be extracted by comparing and tracking the patterns according to these coordinates, and the shape of the anatomical structure OBJ may be extracted from the depth information. For applying a structured light projector to the 3D reconstruction of anatomical structures, a model may be built first: when the anatomical structure is a lung, a general model of the lung may be stored in advance, and when the system is used on a particular person, the 3D model corresponding to that person can be reconstructed by a rapid sweep of the structured light projector within the general model.
Application example 2
The present invention may also directly use the structured light projector and depth camera disclosed in Chinese patent application 201810200423.6 of Guangdong OPPO Mobile Communications Co., Ltd., mounted directly on the end 117 of the distal end 111 of the flexible body 114, which includes a light source, a collimating element and a diffractive optical element. The light source is used for emitting laser light and includes a substrate and an array of light-emitting elements disposed on the substrate. The substrate includes a first region and a second region contiguous with the first region; the density of the light-emitting elements in the first region differs from that in the second region. The collimating element collimates the laser light, and the diffractive optical element diffracts the collimated laser light to form a laser pattern. In this structured light projector and depth camera, because the density of the light-emitting elements differs between the first and second regions, the non-correlation (randomness) of the laser pattern can be improved, improving the speed and precision of obtaining a depth image from the laser pattern.
For the system described above, and taking the lung as an example, fig. 3 shows a flowchart of a method for rapidly navigating the device to the target tissue location. It includes:
S10: providing a device having a flexible body and at least one light scanning component, comprising an emitting illumination assembly and a light receiving assembly, disposed along the length of the flexible body, the light scanning component including a structured light/ToF laser scanning component;
S20: actively projecting structured light/ToF laser onto the anatomical structure with the emitting illumination assembly; the structured light, after reflection from the spatial surface of the anatomical structure, is received by the light receiving assembly and its deformation is calculated to determine the depth of the anatomical structure, or the time difference or phase difference of the ToF laser from emission to reflection is calculated to form range-depth data, so that a computer model of the anatomical structure is built;
S30: planning a path leading to the target tissue position in the anatomical structure, and determining the three-dimensional information of key positions including bifurcation intersections and the target tissue position;
S40: positioning the instrument in a channel of the anatomy when the instrument requires navigation to the target tissue location;
S50: preliminarily determining, from the length of the flexible main body of the instrument extended into the channel, the key position of each key node reached by the instrument; acquiring in real time the three-dimensional information of the current key position captured by the light receiving part at the end of the flexible main body; matching it against the corresponding three-dimensional information of the computer model; and obtaining the channel information by which the instrument needs to be routed at the current bifurcation intersection, so as to guide the instrument along the channel.
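The intraoperative portion of these steps (S40 and S50) can be sketched as a simple loop body. This is a hedged sketch only: `capture_scan` and `match_to_model` stand in for the light scanning component 141 and the matching performed by the navigation processor 160, and the dictionary keys are assumed names, not terms from the disclosure.

```python
# One navigation decision: the insertion length locates the key node the tip
# is approaching, a 3-D scan is captured there, and matching against the
# pre-planned model selects the channel to follow (step S50).
def navigate(plan, insertion_length, capture_scan, match_to_model):
    """plan: ordered key nodes, each {'depth_mm': float, 'candidate_channels': [...]}."""
    # First planned key node not yet passed at this insertion depth.
    node = next((n for n in plan if n["depth_mm"] >= insertion_length), None)
    if node is None:
        return "at-target"          # past the last node: target tissue reached
    scan = capture_scan()           # real-time 3-D data at the bifurcation
    return match_to_model(scan, node["candidate_channels"])
```

In use, the caller repeats this decision at each key node as the flexible body advances, rather than comparing images continuously along the whole route.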
Each step is described in detail below.
S10, S20: scan and build a computer model.
A medical CT image scans the lung, and image segmentation then drives the 3D modeling to form a 3D model of the bronchial tree; this is a common 3D modeling technology in existing medical practice, realized by corresponding software. The present invention may also employ other techniques known in the art to implement scan modeling; for example, the solution provided in the article "4D lung model construction of the human body using CT cine-mode scanning" is one option.
The anatomical structure is assumed to be one that moves in an identifiable manner during the medical procedure, such as the periodic movement of the respiratory and blood circulatory systems or a non-periodic movement such as a physical response to a stimulus. While aspects of the present invention may still be applicable and useful when the anatomy does not move during a medical procedure, the full advantage of the invention is best experienced in an environment where the anatomy moves in an identifiable or otherwise known manner during the medical procedure. One or more sets of images of the patient are obtained using suitable imaging techniques, from which a set of three-dimensional (3-D) computer models of the anatomy may be generated, wherein each 3-D computer model is associated with a different point in time over a period of time, such that time represents a fourth dimension; such images are referred to herein as four-dimensional (4-D) images. Additional dimensions may also be defined and used in the methods described herein. Examples of such imaging techniques include, but are not limited to, fluoroscopy, magnetic resonance imaging, thermography, tomography, ultrasound, optical coherence tomography, thermal imaging, impedance imaging, laser imaging, nanotube X-ray imaging, and the like.
Motion within the period over which the images are captured depends on the anatomy and the motion of interest. For example, when the anatomical structure is a lung, one set of images may cover cyclic motion, in which the lung inflates from a maximum expiratory state to a maximum inspiratory state. Another set of images may cover non-periodic motion such as coughing or another physical response to stimulation that moves the lungs. As another example, when the anatomical structure is a heart, a set of images may cover periodic motion such as the blood circulation. The sampling rate, which determines the number of such 3-D computer models, is selected so that the motion of the anatomy during such periods is adequately described for purposes of accurate registration and navigation.
The light scanning component comprises a structured light/ToF laser scanning component. And actively projecting structural light/ToF laser to the anatomical structure by using an emitting illumination assembly, wherein the structural light is received by the photosensitive receiving assembly after being reflected on the spatial surface of the anatomical structure to calculate the deformation of the structural light so as to determine the depth of the anatomical structure, or the ToF laser calculates the time difference or the phase difference from emission to reflection so as to form distance depth data, so that a computer model of the anatomical structure is built.
Step S30: planning a path leading to a target tissue position in an anatomical structure, and determining key nodes including a branch intersection and the target tissue position and corresponding three-dimensional information.
Taking fig. 5 as an example, the bifurcation openings of the planned path are numbered in advance according to the numbering rule; for example, after entering from the entrance, the route passes through the trachea 35, the bronchi 21, 20 and 22, and then the bronchiole 23. The path plan is 35 -> 21 -> 20 -> 22 -> 23. The junction of two consecutive segments in the planned path is a key node. In this example, the intersection between 35 and 21 is a key node; similarly, the intersections between 21 and 20, 20 and 22, and 22 and 23, and the junction of 23 with the target tissue location, are all key nodes.
TABLE 1 planning Path Table
Determining the image information corresponding to a key node may further include:
establishing in advance a virtual three-dimensional image matching library for the bifurcations related to the key nodes; a plurality of virtual images are stored in this library in advance, the three-dimensional virtual images being stored at preset intervals ahead of each bifurcation along the channel (e.g., the trachea 35, bronchi 21, 20 and 22, then bronchiole 23).
If the three-dimensional information is reduced to two-dimensional feature information, the procedure may be as follows: each virtual image is digitized, and at least one kind of feature information including the centre point, area, shape and texture is extracted. Generally, after 3D modeling, corresponding instructions or operations are set to obtain a virtual image meeting the operator's requirements, and the feature information of the virtual image can be obtained in real time. For example, an image slice at a specific point of the bifurcation of bronchi 21 and 20 is required; feature points including the centre point, area and shape of the image can be extracted and stored in the software according to the operator's requirements. If, however, the three-dimensional information is full-region three-dimensional information, three-dimensional point-cloud information, three-dimensional patch information or the like, the three-dimensional information may be compared directly.
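Two of the features named above, the centre point and the area, can be illustrated on a binarized slice image. This is only an illustrative sketch: the binary-mask input and function name are assumptions, and shape or texture descriptors would be computed by analogous means.

```python
import numpy as np

# Centre point and area of the airway lumen in a binarized virtual-image
# slice of a bifurcation (True where the lumen is visible).
def centroid_and_area(mask: np.ndarray):
    """Return ((cx, cy), area_in_pixels); centroid is None for an empty mask."""
    ys, xs = np.nonzero(mask)
    area = xs.size                      # pixel count of the region
    if area == 0:
        return None, 0
    return (float(xs.mean()), float(ys.mean())), area
```

Features of this kind, precomputed for each stored virtual image, are what the intraoperative comparison would match against.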
If two-dimensional picture information is used, the following steps can be adopted:
the feature information corresponding to the virtual images of each bifurcation is stored in the virtual image matching library, with the bifurcation as the storage unit.
Table 2 key node information storage table
[Table 2 appears as an image in the original publication and is not reproduced here.]
If three-dimensional point cloud information or three-dimensional patch information is used, the comparison of the three-dimensional images is completed directly with three-dimensional comparison software.
Step S50 is explained in detail.
The key position of the instrument relative to each key node is preliminarily determined from the length of the flexible body of the instrument that has extended into the channel. When the flexible body extends into the channel, its insertion length can be controlled by a mechanical actuator. For example, the mechanical actuator may control the insertion length (e.g., the length 118.448 of the trachea 35 from the entrance, minus a threshold covering the possible error in body extension) to ensure that the portion of the flexible body containing the optical scanning component 141, the ultrasonic probe 142 and the actuator 143 is located at the bifurcation of key node 1. Generally, the instrument can be maneuvered to the key position associated with a key node by first stopping a threshold distance short of the bifurcation of key node 1 and then performing N small-amplitude insertions. Then, by checking parameters such as the average diameter (mm) of the trachea, it is judged whether the light scanning component 141 can enter; if its size prevents entry, the extension of the ultrasonic probe 142 is started and controlled instead, in the same control manner. When the target tissue position is reached, only the actuator 143 needs to be operated.
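The length-based coarse localization above can be sketched as follows. The segment length is taken from the illustrative value in the text; the function name, error threshold and step count are assumptions for illustration only.

```python
# Sketch: stop short of the key node by an error threshold, then advance
# in N small-amplitude insertions. Segment lengths are in mm.

def insertion_plan(segment_lengths, error_threshold, n_steps):
    """Safe insertion depth and per-step advance toward the next key node."""
    depth_to_node = sum(segment_lengths)          # cumulative length from the entrance
    safe_depth = depth_to_node - error_threshold  # stop short of the bifurcation
    step = error_threshold / n_steps              # small-amplitude insertions
    return safe_depth, step

safe, step = insertion_plan([118.448], error_threshold=5.0, n_steps=10)
print(round(safe, 3), step)   # -> 113.448 0.5
```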
When the optical scanning component 141 can be inserted, the light receiving component at the end of the flexible body acquires the three-dimensional information of the current key position in real time. The three-dimensional information corresponds directly to the computer model, and the channel through which the instrument needs to be routed at the current branch intersection is obtained by matching, guiding the instrument through the channel.
If the image information is two-dimensional image information, registering the feature information with the virtual image information of the corresponding position of the computer model further comprises:
acquiring image information of the current key position via the corresponding component at the end of the flexible body, and judging the number of channels at the current bifurcation in advance;
if the number of channels at the current bifurcation is greater than two, extracting the center of each channel, connecting the centers to obtain a geometric shape, and obtaining the channel to be routed by registering the angle of the geometric shape against the corresponding angle information of the computer model;
if the number of channels at the current bifurcation is two, obtaining the channel to be routed by registering at least one kind of extracted feature information, including a center point, an area, a shape and a texture, against the pre-stored feature information corresponding to the computer model.
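The multi-channel case, connecting the channel centers into a geometric shape and using its angles to identify the channel to take, can be sketched as below. The center coordinates and the stored model angle are hypothetical values, and the angle-about-centroid formulation is one plausible reading of the registration step.

```python
import math

# Sketch: angles of each channel centre about the centroid of all
# centres, then pick the channel whose angle best matches the angle
# stored in the computer model for the planned channel.

def channel_angles(centres):
    """Angle (degrees) of each channel centre about the centroid of all centres."""
    cx = sum(x for x, _ in centres) / len(centres)
    cy = sum(y for _, y in centres) / len(centres)
    return [math.degrees(math.atan2(y - cy, x - cx)) % 360 for x, y in centres]

def pick_channel(centres, model_angle):
    """Index of the channel whose angle best matches the model's stored angle."""
    angles = channel_angles(centres)
    diffs = [min(abs(a - model_angle), 360 - abs(a - model_angle)) for a in angles]
    return diffs.index(min(diffs))

centres = [(10.0, 0.0), (0.0, 10.0), (-10.0, -10.0)]   # three bifurcation channels
print(pick_channel(centres, model_angle=90.0))          # -> 1 (the channel near 90 deg)
```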
Determining the critical position associated with the arrival of the instrument at each critical node further comprises:
when a planned path leading to a target tissue position in an anatomical structure is obtained, numbering each bifurcation of the planned path in advance according to a numbering rule;
each time the key position related to a key node is determined, judging by its number whether that key node is the target tissue position; if it is the target tissue position, obtaining by matching the position relation information between the instrument and the target tissue position; otherwise, obtaining by matching the channel through which the instrument needs to be routed at the current branch intersection, and numbering the next key node according to the numbering rule.
If three-dimensional information is used, determining the three-dimensional information corresponding to the key nodes in step S30 further includes:
establishing in advance a virtual three-dimensional information matching library of the bifurcations related to the key nodes, wherein a plurality of pieces of three-dimensional information, including at least one of three-dimensional point cloud information and three-dimensional patch information, is stored in this library in advance, the virtual three-dimensional information being acquired at a preset interval, or at a core position within a preset length, ahead of each bifurcation in the passage.
Registering the three-dimensional information of the feature information and the position corresponding to the computer model in step S50 further includes:
the light receiving component at the end of the flexible body acquires the three-dimensional information of the current key position and judges the number of channels at the current bifurcation in advance;
if the number of channels at the current bifurcation is greater than two, extracting the three-dimensional information of each channel and obtaining the channel to be routed by registration;
if the number of channels at the current bifurcation is two, obtaining the channel to be routed by registering the extracted three-dimensional information against the pre-stored three-dimensional information corresponding to the computer model.
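A direct three-dimensional comparison between a scanned point cloud and a stored virtual cloud can be sketched with a simple mean nearest-neighbor distance, a stand-in for the dedicated three-dimensional comparison software mentioned above; the function name and score are illustrative assumptions.

```python
import numpy as np

# Sketch: mean distance from each scanned point to its nearest model
# point, used as a similarity score between two point clouds.

def cloud_distance(scan, model):
    """Mean nearest-neighbour distance from `scan` points to `model` points."""
    d = np.linalg.norm(scan[:, None, :] - model[None, :, :], axis=2)
    return d.min(axis=1).mean()

model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
scan = model + 0.1                      # slightly offset copy of the model cloud
print(round(cloud_distance(scan, model), 3))   # -> 0.173
```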
The method may further comprise:
performing ultrasonic detection with an ultrasonic probe arranged at the end of the flexible body, and obtaining by matching either the position relation information between the instrument and the target tissue position or the channel to be routed.
It further comprises:
when the channel narrows below a threshold value, or the target tissue position is not in the channel, starting ultrasonic detection with the ultrasonic probe; obtaining by matching the position relation information between the instrument and the target tissue position, or the channel to be routed, further comprises:
starting the ultrasonic probe, obtaining an annular-scan ultrasonic image, and extracting shape parameter information;
matching the shape parameter information against the image information of the corresponding position of the computer model to obtain the position relation information between the instrument and the target tissue position, or to obtain the channel to be routed.
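One way to extract shape parameters from an annular-scan contour is sketched below: area and perimeter via the shoelace formula on contour points, plus a circularity measure for matching. The patent does not specify which shape parameters are used, so these descriptors and the function name are assumptions for illustration.

```python
import math

# Sketch: area, perimeter and circularity of a closed contour sampled
# from a ring-scan ultrasound image. Circularity is 1.0 for a circle.

def shape_params(points):
    """Shape descriptors of a closed polygonal contour."""
    n = len(points)
    area = abs(sum(points[i][0] * points[(i + 1) % n][1]
                   - points[(i + 1) % n][0] * points[i][1] for i in range(n))) / 2
    perim = sum(math.dist(points[i], points[(i + 1) % n]) for i in range(n))
    circularity = 4 * math.pi * area / perim ** 2
    return area, perim, circularity

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
a, p, c = shape_params(square)
print(a, p, round(c, 3))   # -> 1.0 4.0 0.785
```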
The device may further be provided with an instrument operation component of the surgical robot; when the position relation information between the instrument and the target tissue position is obtained, the instrument operation component of the instrument is navigated to the relevant position so as to perform an instrument operation comprising one of biopsy taking, puncture, multiple-energy ablation, ablation and resection.
When path planning is carried out on the established computer model, key nodes are confirmed in advance, the key nodes mainly comprising branch intersections and the target tissue position. During intraoperative navigation, three-dimensional information comparison is performed only for the parts relevant to the key nodes: at each branch intersection the channel for the instrument's route is confirmed, and at the precise position of the target tissue the position relation between the current instrument position and the target tissue is obtained. In other words, the present invention compares the three-dimensional information related to the key nodes a limited number of times, so that navigation is completed more directly and more effectively: the number of comparisons is limited, the precision is high, and the comparison speed is fast.
Second embodiment
As structured light/ToF laser technology advances further toward smaller dimensions, the ultrasonic system is no longer required, and the light scanning component and the light processor directly perform the functions described above for the ultrasound.
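The ToF depth calculation used by the light scanning component can be sketched as follows: distance from the emission-to-reflection time difference (d = c·dt/2), with the equivalent phase-difference form for a continuous-wave modulated source. The function names are illustrative; the physics is standard.

```python
import math

C = 299_792_458.0   # speed of light, m/s

def tof_distance(dt_seconds):
    """Round-trip time difference -> one-way distance."""
    return C * dt_seconds / 2

def phase_distance(phase_rad, mod_freq_hz):
    """Phase shift of a continuous-wave modulated source -> one-way distance."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

print(round(tof_distance(1e-9), 4))   # 1 ns round trip -> 0.1499 m
```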
Third embodiment
The present invention may also be supplemented with corresponding image acquisition and processing by adding an endoscope system. It is only necessary to add an endoscope lens at the end of the flexible body and an endoscope image processor to the processing device.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (3)

1. A rapid navigation system for enabling navigation of a device to a target tissue site, comprising:
a storage device, pre-storing computer model information of an anatomical structure of a patient, storing a planned path leading to a target tissue position in the anatomical structure, and determining and storing key nodes, including branch intersections and the target tissue position, together with the corresponding image information;
a medical instrument having a flexible body and a light scanning component comprising at least an emitting illumination component and a light receiving component distributed along the length of the flexible body, the light scanning component comprising a structured light/ToF laser scanning component that uses the emitting illumination component to actively project structured light or a ToF laser onto the anatomical structure, wherein the structured light is received by the light receiving component after reflection at a spatial surface of the anatomical structure so that the deformation of the structured light is calculated to determine the depth of the anatomical structure, or the time difference or phase difference of the ToF laser from emission to reflection is calculated to form distance depth data, thereby creating a computer model of the anatomical structure;
a processing device comprising at least a light processor and a navigation processor, wherein the light processor, triggered by the navigation processor, acquires in real time the image information of the current key position collected by the light scanning component at the end of the flexible body, and extracts the feature information of the image at that key position; the navigation processor preliminarily determines the key position of the instrument relative to each key node from the length of the flexible body of the instrument extended into the channel, and matches the three-dimensional information of the key position from the light processor against the three-dimensional information corresponding to the computer model to obtain the channel through which the instrument needs to be routed at the current branch intersection, guiding the instrument through the channel; wherein determining the three-dimensional information corresponding to the key nodes further comprises: establishing in advance a virtual three-dimensional information matching library of the bifurcations related to the key nodes, a plurality of pieces of three-dimensional information including at least one of three-dimensional point cloud information and three-dimensional patch information being stored in the library in advance, the virtual three-dimensional information being acquired at a preset distance, or at a core position within a preset length, ahead of each bifurcation in the passage; and wherein, if the three-dimensional information includes two-dimensional feature information, each virtual image is digitized to extract at least one kind of feature information including a center point, an area, a shape and a texture, and the comparison of the three-dimensional information is a comparison of at least one kind of feature information including the center point, the area, the shape and the texture, so that the comparison of the three-dimensional information of the key positions is realized a plurality of times.
2. The system of claim 1, further comprising:
distributing at least one ultrasonic probe along the length of the flexible body;
the processing device further comprises an ultrasonic processor for obtaining, by matching, the position relation information between the instrument and the target tissue position or the channel to be routed; when the channel narrows below a threshold value, or the target tissue position is not in the channel, the ultrasonic probe is started for ultrasonic detection, the ultrasonic probe obtains an annular-scan ultrasonic image, and shape parameter information is extracted; and the shape parameter information is matched against the image information of the corresponding position of the computer model to obtain the position relation information between the instrument and the target tissue position, or to obtain the channel to be routed.
3. The system of claim 1 or 2, wherein: further comprising:
an instrument operation component of a surgical robot distributed along the length of the flexible body, for navigating the instrument operation component of the instrument to the relevant position after the position relation information between the instrument and the target tissue position is obtained, so as to perform an instrument operation including one of biopsy taking, puncture, multiple-energy ablation, ablation and resection.
CN202011514169.0A 2020-12-18 2020-12-18 Method and system for realizing navigation by using optical scanning component Active CN112741689B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011514169.0A CN112741689B (en) 2020-12-18 2020-12-18 Method and system for realizing navigation by using optical scanning component


Publications (2)

Publication Number Publication Date
CN112741689A CN112741689A (en) 2021-05-04
CN112741689B true CN112741689B (en) 2022-03-18

Family

ID=75648290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011514169.0A Active CN112741689B (en) 2020-12-18 2020-12-18 Method and system for realizing navigation by using optical scanning component

Country Status (1)

Country Link
CN (1) CN112741689B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565739A (en) * 2022-03-01 2022-05-31 上海微创医疗机器人(集团)股份有限公司 Three-dimensional model establishing method, endoscope and storage medium
TWI810054B (en) * 2022-09-02 2023-07-21 財團法人工業技術研究院 Bronchoscopy navigation method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103356155A (en) * 2013-06-24 2013-10-23 清华大学深圳研究生院 Virtual endoscope assisted cavity lesion examination system
CN103371795A (en) * 2012-04-18 2013-10-30 索尼公司 Image processing apparatus and image processing method
CN105208909A (en) * 2013-04-17 2015-12-30 西门子公司 Method and device for stereoscopic depiction of image data
CN109259806A (en) * 2017-07-17 2019-01-25 云南师范大学 A method of the accurate aspiration biopsy of tumour for image guidance
WO2019245818A1 (en) * 2018-06-19 2019-12-26 Intuitive Surgical Operations, Inc. Systems and methods related to registration for image guided surgery
CN111772792A (en) * 2020-08-05 2020-10-16 山东省肿瘤防治研究院(山东省肿瘤医院) Endoscopic surgery navigation method, system and readable storage medium based on augmented reality and deep learning

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8900131B2 (en) * 2011-05-13 2014-12-02 Intuitive Surgical Operations, Inc. Medical system providing dynamic registration of a model of an anatomical structure for image-guided surgery
US10154239B2 (en) * 2014-12-30 2018-12-11 Onpoint Medical, Inc. Image-guided surgery with surface reconstruction and augmented reality visualization
US20170280970A1 (en) * 2016-03-31 2017-10-05 Covidien Lp Thoracic endoscope for surface scanning
EP3576663A4 (en) * 2017-02-01 2020-12-23 Intuitive Surgical Operations Inc. Systems and methods of registration for image-guided procedures
US10262453B2 (en) * 2017-03-24 2019-04-16 Siemens Healthcare Gmbh Virtual shadows for enhanced depth perception
US10346978B2 (en) * 2017-08-04 2019-07-09 Capsovision Inc. Method and apparatus for area or volume of object of interest from gastrointestinal images
US10521916B2 (en) * 2018-02-21 2019-12-31 Covidien Lp Locating tumors using structured light scanning
US20220248943A1 (en) * 2019-06-17 2022-08-11 Shenzhen Sibernetics Co., Ltd. Magnetic control device of capsule endoscope and method for controlling movement of capsule endoscope in tissue cavity
CN211883710U (en) * 2019-08-15 2020-11-10 成都术通科技有限公司 Endoscope subassembly convenient to formation of image




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant