CN113100943A - Navigation processing method, device, system, equipment and medium in physiological channel - Google Patents

Navigation processing method, device, system, equipment and medium in physiological channel

Info

Publication number
CN113100943A
Authority
CN
China
Prior art keywords
detection information
sensor
sensors
information
kth
Prior art date
Legal status
Granted
Application number
CN202110407995.3A
Other languages
Chinese (zh)
Other versions
CN113100943B (en)
Inventor
余坤璋
李楠宇
陈日清
徐宏
苏晨晖
Current Assignee
Hangzhou Kunbo Biotechnology Co Ltd
Original Assignee
Hangzhou Kunbo Biotechnology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Kunbo Biotechnology Co Ltd
Publication of CN113100943A
Application granted
Publication of CN113100943B
Legal status: Active


Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 — Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 — Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/267 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the respiratory tract, e.g. laryngoscopes, bronchoscopes
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/267 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the respiratory tract, e.g. laryngoscopes, bronchoscopes
    • A61B 1/2676 — Bronchoscopes
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/273 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/307 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the urinary organs, e.g. urethroscopes, cystoscopes
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/31 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the rectum, e.g. proctoscopes, sigmoidoscopes, colonoscopes
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 — Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 — Computer-aided planning, simulation or modelling of surgical operations
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 — Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 — Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/101 — Computer-aided simulation of surgical operations
    • A61B 2034/105 — Modelling of the patient, e.g. for ligaments or bones
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 — Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 — Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/107 — Visualisation of planned trajectories or target regions
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 — Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 — Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/108 — Computer aided selection or customisation of medical implants or cutting guides
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 — Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 — Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046 — Tracking techniques
    • A61B 2034/2051 — Electromagnetic tracking systems
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 — Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 — Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046 — Tracking techniques
    • A61B 2034/2065 — Tracking using image or pattern recognition

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Pulmonology (AREA)
  • Otolaryngology (AREA)
  • Physiology (AREA)
  • Robotics (AREA)
  • Urology & Nephrology (AREA)
  • Gastroenterology & Hepatology (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Endoscopes (AREA)

Abstract

The invention provides a navigation processing method, device, system, equipment and medium in a physiological channel, using a catheter provided with N sensors and an image acquisition part. The navigation processing method comprises the following steps: after the catheter enters a physiological channel, acquiring actual detection information of the N sensors and a channel real image acquired by the image acquisition part, wherein the detection information represents the position of the catheter at the location of the corresponding sensor; extracting a plurality of simulated slice images from a model of the physiological channel according to the detection information, a simulated slice image being the image observed when the internal passage of the model is viewed from the corresponding position in the model; and determining the position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images.

Description

Navigation processing method, device, system, equipment and medium in physiological channel
Technical Field
The present invention relates to the medical field, and in particular, to a method, an apparatus, a system, a device, and a medium for navigation processing in a physiological channel.
Background
In medical activities, a catheter needs to be introduced into a physiological channel of an animal or human body, for example to facilitate endoscopy and biopsy procedures. After the catheter enters the physiological channel, it is often necessary to determine the position of the catheter within the channel for navigation.
In the prior art, a sensor can be arranged on the catheter, the motion trajectory of the sensor is acquired, and the position of the catheter is then determined through registration between the motion trajectory and a detection map of the physiological channel. However, it is difficult to accurately and effectively acquire a continuous motion trajectory, which can degrade registration and positioning and reduce the accuracy of navigation in the physiological channel.
Disclosure of Invention
The invention provides a navigation processing method, device, system, equipment and medium in a physiological channel, aiming to solve the problem of poor navigation accuracy in the physiological channel.
According to a first aspect of the present invention, a method for navigation processing in a physiological channel is provided, wherein a catheter is adopted, the catheter is provided with N sensors and an image acquisition portion, the N sensors are sequentially distributed at different positions in the length direction of the catheter, wherein N is greater than or equal to 2;
the navigation processing method comprises the following steps:
after the catheter enters a physiological channel, acquiring actual detection information of the N sensors and a channel real image acquired by the image acquisition part, wherein the detection information represents the position of the catheter at the location of the corresponding sensor;
extracting a plurality of simulated slice images from a model of the physiological channel according to the detection information;
and determining the position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images.
Therefore, the model of the physiological channel reflects the form of the physiological channel and provides a reliable basis for positioning; on this basis, the channel real image and the simulated slice images together provide a reliable, accurate and sufficient basis for positioning the catheter. Meanwhile, because the simulated slice images are extracted based on the detection information of the sensors, it is unnecessary to use all simulated slice images of the model: registering against only a local set of simulated slice images effectively reduces the amount of data to be processed, simplifies the processing flow and improves processing efficiency.
Optionally, extracting a plurality of simulated slice images from the model of the physiological channel according to the detection information specifically includes:
determining a target channel range in the model of the physiological channel according to the detection information; the target channel range is matched with the channel range where the image acquisition part is actually located;
and extracting the plurality of simulated slice images according to the target channel range.
Optionally, determining a target channel range in the model of the physiological channel according to the detection information specifically includes:
projecting positions represented by detection information of L sensors into a coordinate system of the model, and determining the positions of the L sensors in the coordinate system; wherein L is more than or equal to 1 and less than or equal to N;
and determining the target channel range according to the positions of the L sensors in the coordinate system and the relative intervals between the L sensors and the image acquisition part.
In the above alternative, since the target channel range is determined based on the positions of the sensors, and the relative interval between each sensor and the image acquisition part is fixed, the target channel range can be ensured to accurately cover the actual position of the image acquisition part.
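As a rough illustration of this step, the sketch below derives a target channel range in model coordinates from the projected positions of the L sensors and their known along-catheter distances to the image acquisition part. All names, the axis-aligned-bounds formulation and the safety margin are the author's assumptions, not the patent's method:

```python
def target_channel_range(sensor_positions, camera_offsets, margin=5.0):
    """Axis-aligned bounds (lo, hi) in model coordinates expected to
    cover the image acquisition part.

    sensor_positions: (x, y, z) positions of the L sensors, already
    projected into the model coordinate system.
    camera_offsets: along-catheter distance from each sensor to the
    image acquisition part (fixed by the catheter geometry).
    margin: illustrative allowance for detection/registration error.
    """
    los, his = [], []
    for (x, y, z), d in zip(sensor_positions, camera_offsets):
        r = d + margin  # the camera lies within r of this sensor
        los.append((x - r, y - r, z - r))
        his.append((x + r, y + r, z + r))
    # Intersect the per-sensor boxes: tightest bound per axis.
    lo = tuple(max(v[i] for v in los) for i in range(3))
    hi = tuple(min(v[i] for v in his) for i in range(3))
    return lo, hi
```

Each sensor constrains the camera to lie within its own offset; intersecting the per-sensor boxes yields a conservative range that still covers the camera when individual readings are noisy.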
Optionally, the L sensors include a sensor adjacent to the image capturing part among the N sensors.
In the above alternative, using the sensor adjacent to the image acquisition part helps to improve the accuracy of the target channel range (i.e., the actual position of the image acquisition part can be covered more accurately).
Optionally, determining the position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images, specifically including:
determining a target image in the plurality of simulated slice images according to the similarity between the channel real image and the plurality of simulated slice images;
and determining the position of the catheter in the physiological channel according to the position of the target image in the model.
In the above alternative, based on the similarity between the images, the target image can be accurately determined, thereby ensuring the accuracy of catheter positioning.
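One plausible way to realize this similarity-based selection is sketched below. The patent does not commit to a particular similarity metric; zero-mean normalized cross-correlation is used here purely as an example, and all names are illustrative:

```python
import math

def similarity(img_a, img_b):
    """Zero-mean normalized cross-correlation of two equally sized
    grayscale images given as flat pixel-intensity lists (1.0 means
    identical up to a positive affine intensity change)."""
    n = len(img_a)
    ma, mb = sum(img_a) / n, sum(img_b) / n
    da = [a - ma for a in img_a]
    db = [b - mb for b in img_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

def pick_target_image(real_image, simulated_slices):
    """Return (index, score) of the simulated slice image most
    similar to the channel real image."""
    scores = [similarity(real_image, s) for s in simulated_slices]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]
```

The index of the winning slice maps back to a position in the model, which then gives the catheter position in the physiological channel.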
Optionally, before extracting a plurality of simulated slice images from the model of the physiological channel according to the detection information, the method further includes:
correcting at least part of the actual detection information of the sensors according to the actual detection information of the N sensors and the interval length information between the sensors, to obtain corrected detection information, so that the plurality of simulated slice images are extracted according to the corrected detection information; the interval length information indicates the length of the portion of the catheter lying between the sensors.
In this scheme, the detection information is corrected, and because the interval length information between the sensors is incorporated in the correction, the correction result is constrained by the distribution positions of the sensors, which improves the accuracy of the corrected detection information.
Optionally, the method for correcting at least part of the actual detection information of the sensors according to the actual detection information of the N sensors and the interval length information between the sensors to obtain corrected detection information specifically includes:
for any k-th sensor, correcting the actual detection information of the k-th sensor according to the detection information of one or more sensors located between the k-th sensor and the physiological channel entrance and the interval length information between those sensors and the k-th sensor, wherein k is greater than or equal to 2, the k-th sensor refers to the k-th of the N sensors distributed sequentially along a target order, and the target order is opposite to the order in which the sensors enter the physiological channel.
Optionally, correcting the actual detection information of the k-th sensor according to the detection information of one or more sensors located between the k-th sensor and the physiological channel entrance and the interval length information between those sensors and the k-th sensor, to obtain the corrected detection information of the k-th sensor, specifically includes:
predicting at least part of detection information of the kth sensor according to actual detection information or corrected detection information of the mth sensor and interval length information between the kth sensor and the mth sensor to obtain predicted detection information of the kth sensor; wherein m is less than k;
and correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor to obtain the corrected detection information of the kth sensor.
In the above embodiments, a sensor is corrected based on the detection information of a preceding sensor. In a physiological channel such as the bronchial tree, the deeper a position is, the more it is disturbed by physiological reactions (e.g., the influence of respiration), while a sensor located nearer the front, closer to the channel entrance, receives less such interference. Using a front sensor to correct and compensate a rear sensor can therefore eliminate or reduce the influence of the interference on the detection result and improve the accuracy of the detection information.
Optionally, when m is k-1, the detection information of at least some of the sensors is corrected sequentially along the target order.
With this scheme, each correction is performed on the basis of detection information that has already been corrected and is therefore accurate.
Optionally, the predicted detection information includes position information of a predicted position of the kth sensor, and a distance between the predicted position and a position represented by the detection information of the mth sensor is matched with interval length information between the kth sensor and the mth sensor.
Optionally, the detection information further represents the posture of the catheter position where the corresponding sensor is located;
correcting the actual detection information of the k-th sensor according to the predicted detection information of the k-th sensor to obtain the corrected detection information of the k-th sensor comprises the following steps:
determining a corresponding extension line according to the actual detection information or the corrected detection information of the mth sensor, wherein the position of the extension line is matched with the position represented by the corresponding detection information, and the extension direction of the extension line is matched with the posture represented by the corresponding detection information;
and determining the predicted position according to the extension line and the interval length information between the kth sensor and the mth sensor.
In the above schemes, the position and posture of the m-th sensor are fully taken into account when predicting the position of the k-th sensor, so that the correction result accurately and fully reflects them, improving the correction accuracy.
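A minimal sketch of the extension-line prediction follows, assuming the m-th sensor's attitude has already been converted into a direction vector (the vector representation is an assumption; the patent encodes attitude as rotation angles about the x-, y- and z-axes):

```python
import math

def predict_position(pos_m, dir_m, interval_length):
    """Predict the k-th sensor's position from the m-th sensor.

    pos_m: (x, y, z) from the m-th sensor's actual or corrected
    detection information. dir_m: direction vector derived from the
    m-th sensor's detected attitude. The predicted position lies on
    the extension line, interval_length away from the m-th sensor.
    """
    norm = math.sqrt(sum(c * c for c in dir_m))
    unit = tuple(c / norm for c in dir_m)
    # Walk interval_length along the extension line.
    return tuple(p + interval_length * u for p, u in zip(pos_m, unit))
```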
Optionally, the predicted detection information further includes attitude information of a predicted attitude of the kth sensor, and the predicted attitude is matched to the attitude of the mth sensor.
In the above scheme, the attitude of the m-th sensor is fully taken into account when predicting the attitude of the k-th sensor, improving the correction accuracy; furthermore, a position corrected on the basis of this attitude can be more accurate.
Optionally, the correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor to obtain the corrected detection information of the kth sensor specifically includes:
correcting actual detection information of the kth sensor according to the predicted detection information of the kth sensor and set correction reference information;
wherein the correction reference information includes: first correction reference information, representing the degree to which the corrected detection information of the corresponding sensor matches the predicted detection information, and/or second correction reference information, representing the degree to which the corrected detection information matches the actual detection information.
Optionally, the correction reference information differs for sensors at different positions in the order:
among the N sensors, the closer a sensor is to the entrance of the physiological channel, the lower the matching degree represented by its first correction reference information, and the higher the matching degree represented by its second correction reference information.
In the above scheme, the closer a sensor is to the entrance of the physiological channel, the less interference it receives (for example, in the bronchial tree, less respiratory interference near the entrance). Accordingly, assigning different correction reference information to different sensors accurately matches each sensor's place in the order, and hence the distribution of interference magnitudes, ensuring the accuracy of the correction.
Optionally, the correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor and the set correction reference information specifically includes:
performing, according to the correction reference information, a weighted summation of the predicted detection information of the k-th sensor and the actual detection information of the k-th sensor to obtain the corrected detection information of the k-th sensor; the first correction reference information is a first weighting value applied to the predicted detection information, and the second correction reference information is a second weighting value applied to the actual detection information.
The above scheme provides a quantifiable means of correcting the detection information: with weighted summation, the predicted and actual detection information are both effectively taken into account through the weighting values, while the algorithm remains relatively simple.
Optionally, performing the weighted summation of the predicted detection information and the actual detection information of the k-th sensor according to the correction reference information specifically includes:
correcting the actual detection information of the k-th sensor based on the following formula:
(xk′, yk′, zk′, αk′, βk′, γk′) = (1 − λ)(xk, yk, zk, αk, βk, γk) + λ(xp, yp, zp, αp, βp, γp)
wherein:
(xk′, yk′, zk′, αk′, βk′, γk′) is the corrected detection information of the k-th sensor, in which xk′, yk′, zk′ are the coordinates along the x-, y- and z-axis directions and αk′, βk′, γk′ are the rotation angles around the x-, y- and z-axes;
(xk, yk, zk, αk, βk, γk) is the actual detection information of the k-th sensor, with components defined in the same way;
(xp, yp, zp, αp, βp, γp) is the predicted detection information of the k-th sensor, with components defined in the same way;
λ is the first weighting value and 1 − λ is the second weighting value.
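The weighted summation maps directly onto code. The sketch below implements the formula, plus an illustrative sequential pass along the target order with m = k − 1; the per-sensor λ schedule and the `predict` callable are assumptions for illustration, not the patent's API:

```python
def correct_detection(actual_k, predicted_k, lam):
    """Blend actual and predicted detection information of the k-th
    sensor. actual_k / predicted_k: 6-tuples (x, y, z, alpha, beta,
    gamma). lam is the first weighting value (applied to the
    prediction) and 1 - lam the second (applied to the actual)."""
    return tuple((1.0 - lam) * a + lam * p
                 for a, p in zip(actual_k, predicted_k))

def correct_all(actuals, intervals, lambdas, predict):
    """Sequentially correct sensors along the target order (m = k - 1),
    feeding each corrected result into the next prediction.

    predict: maps (previous corrected 6-tuple, interval length) to a
    predicted 6-tuple. lambdas can grow with depth, since deeper
    sensors suffer more interference.
    """
    corrected = [actuals[0]]  # sensor nearest the entrance: trusted as-is
    for k in range(1, len(actuals)):
        pred = predict(corrected[k - 1], intervals[k - 1])
        corrected.append(correct_detection(actuals[k], pred, lambdas[k]))
    return corrected
```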
Optionally, the navigation processing method further includes:
before the catheter enters a physiological channel, establishing the model according to the scanning data of the physiological channel;
and determining a navigation path according to the model and the marked target point, and using the navigation path as a movement basis of the catheter after entering the physiological channel.
In the above alternative, since the model is established from the scan data, the model can be ensured to accurately reflect the form of the physiological channel; accordingly, the navigation path determined based on the model can accurately guide the catheter as it travels in the physiological channel.
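For the path determination, one simple possibility is a breadth-first search from the channel entrance to the marked target point. The node/neighbour graph representation of the model is an assumption for illustration; the patent only states that the path is determined from the model and the target:

```python
from collections import deque

def navigation_path(adjacency, entrance, target):
    """Breadth-first search for a navigation path on a channel graph.

    adjacency: dict mapping a node of the physiological-channel model
    (e.g. a bronchial-tree branch point) to its child nodes.
    """
    parent = {entrance: None}
    queue = deque([entrance])
    while queue:
        node = queue.popleft()
        if node == target:
            path = []          # walk parents back to the entrance
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adjacency.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None  # target not reachable in the model
```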
Optionally, the physiological channel is a bronchial tree.
According to a second aspect of the present invention, a navigation processing device in a physiological channel is provided, wherein a catheter is adopted, the catheter is provided with N sensors and an image acquisition part, the N sensors are sequentially distributed at different positions in the length direction of the catheter, and N is greater than or equal to 2;
a navigation processing apparatus comprising:
the acquisition module is used for acquiring actual detection information of the N sensors and a channel real image acquired by the image acquisition part after the catheter enters a physiological channel, wherein the detection information represents the position of the catheter where the corresponding sensor is located;
the slice extraction module is used for extracting a plurality of simulation slice images from the model of the physiological channel according to the detection information;
and the positioning module is used for determining the position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images.
According to a third aspect of the invention, there is provided an electronic device comprising a processor and a memory,
the memory is used for storing codes;
the processor is configured to execute the codes in the memory to implement the navigation processing method according to the first aspect and the optional aspects thereof.
According to a fourth aspect of the present invention, there is provided a storage medium having stored thereon a computer program that, when executed by a processor, implements the navigation processing method relating to the first aspect and its alternatives.
According to a fifth aspect of the present invention, there is provided a navigation system within a physiological channel, comprising: the device comprises a catheter, N sensors, an image acquisition part and a data processing part, wherein the N sensors and the image acquisition part are arranged on the catheter, the N sensors are sequentially distributed at different positions in the length direction of the catheter, and the data processing part can be directly or indirectly communicated with the N sensors and the image acquisition part;
the data processing section is configured to execute the navigation processing method according to the first aspect and the optional aspects thereof.
Optionally, the sensor is a magnetic navigation sensor.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a navigation system within a physiologic tunnel in accordance with an embodiment of the present invention;
FIG. 2 is a schematic geometric diagram of a catheter, an image capture section and a sensor in accordance with an embodiment of the present invention;
FIG. 3 is a first flowchart illustrating a method for navigation processing within a physiological channel according to an embodiment of the present invention;
FIG. 4 is a schematic representation of a model in one embodiment of the invention;
FIG. 5 is a flowchart illustrating step S22 according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating step S221 according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a model and a target channel range in accordance with an embodiment of the present invention;
FIG. 8 is a flowchart illustrating step S23 according to an embodiment of the present invention;
FIG. 9 is a second flowchart illustrating a method for navigation processing within a physiological channel according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating a model and navigation paths in accordance with an embodiment of the present invention;
FIG. 11 is a schematic diagram of the principle of respiratory disturbance in an embodiment of the present invention;
FIG. 12 is a flowchart illustrating step S26 according to an embodiment of the present invention;
FIG. 13 is a flowchart illustrating step S261 according to an embodiment of the present invention;
FIG. 14 is a first block diagram illustrating the program modules of the navigation processing device within a physiological channel in accordance with an embodiment of the present invention;
FIG. 15 is a second block diagram illustrating the program modules of the navigation processing device within a physiological channel in accordance with an embodiment of the present invention;
FIG. 16 is a schematic configuration diagram of an electronic device in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
The navigation processing method and apparatus in the physiological channel provided by the embodiment of the present invention can be applied to an execution subject (for example, a device or a combination of devices) having a data processing capability, and can be specifically understood as the electronic device 50 and the data processing unit 103 which are referred to later. At least part of the steps of the navigation processing method can be realized based on LungPoint software.
Referring to fig. 1, the navigation system in the physiological channel may include a catheter 101 and N sensors 102, where N is greater than or equal to 2 (for example, 5, 6, 7, 8, 9, 10, etc.); the required number may be selected arbitrarily according to the requirements of the medical activity, the type and form of the physiological channel, the detection accuracy of the sensors, and so on.
Meanwhile, the navigation system further includes an image capturing unit 104, which may, for example, be part of an endoscopic apparatus or an image capturing unit provided separately from an existing endoscopic apparatus. The image capturing unit is provided on the catheter 101, typically at the distal end of the catheter 101 (which may be understood as the end remote from the entrance of the physiological channel), without excluding the possibility of other locations.
The endoscopic device may be a component or combination of components capable of performing endoscopy in a physiological channel. In addition to the image collecting portion, the endoscopic device may further include at least one of an illuminating component, packaging material, and the like, but is not limited thereto; it may be a configuration in which these components are assembled and packaged together. The endoscopic device may be provided at the distal end of the catheter 101 or at a position other than the distal end.
The catheter 101 may be understood as a structure that carries the sensors and is adapted to deliver the N sensors into the physiological channel. It may include, for example, a flexible tube, a rigid tube, a device for guiding the catheter, other devices for medical activities, as well as circuitry and structures for electrically connecting the sensors 102 and the image capturing unit 104 to the outside.
The sensor 102 may be understood as a sensor capable of detecting its own position; when the sensor 102 is disposed on the catheter, it can be understood as capable of detecting the position of the catheter portion where it is disposed. In some embodiments, the sensor may also detect its own attitude (i.e., detect the attitude of the catheter portion where the sensor 102 is disposed); thus, the detection information detected by the sensor may indicate the position (or the position and attitude) of the catheter portion where the sensor 102 is disposed. In addition, the detection information that can be detected by the sensor is not limited to position and attitude.
Any sensor in the art that can perform position (or position and attitude) detection does not depart from the scope of the embodiments of the present invention. In a further aspect, the sensor 102 may be a magnetic navigation sensor, an optical fiber sensor, a shape sensor, or the like, and any sensor may be used without departing from the scope of the embodiments of the present invention.
If the physiological channel is a bronchial tree, the image capturing unit and the entire catheter (or the entire endoscope apparatus and the catheter) may be understood as a bronchoscope. The positioning of the catheter may also be understood as the positioning of the bronchoscope.
In the embodiment of the present invention, referring to the geometric schematic diagram shown in fig. 2, the N sensors 102 are sequentially distributed at different positions along the length direction of the catheter 101; further, a section of the catheter may be spaced between two adjacent sensors 102, and the lengths of these spaced catheter portions may be uniform or non-uniform. In the example shown in fig. 2, the number of sensors 102 is six.
The execution body mentioned above may communicate with the sensor and the image acquisition unit, and the communication may be realized in a wired manner or in a wireless manner.
In the embodiment of the present invention, referring to fig. 3, the navigation processing method includes:
s21: after the catheter enters a physiological channel, acquiring actual detection information of the N sensors and a channel real image acquired by the image acquisition part;
s22: extracting a plurality of simulated slice images from a model of the physiological channel according to the detection information;
s23: and determining the position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images.
The channel real image in step S21 may be any image collected by the image collecting unit: it may be collected in real time or at certain time intervals, during the whole process of entering the physiological channel or only at specific times; any manner of collection does not depart from the scope of the embodiments of the present invention.
The model of the physiological channel in step S22 may be, for example, the model 301 shown in fig. 4 (for convenience of disclosure, fig. 4 is a simplified model, but the actual model may not be). It may be any model describing a physiological channel.
The physiological channel may be any physiological channel of any human or animal body, such as a bronchial tree (which can be understood by referring to the form of the model shown in fig. 4), and in other examples, the physiological channel may also be a channel of a urinary system, a channel of a digestive system, and so on. The physiologic tunnel may have multiple intersections (or bifurcations, as may be understood) therein.
The physiologic tunnel can be further classified and segmented based on bifurcation or other reference, for example, the bronchial tree can include left and right main bronchi, lobar bronchi, and further can be divided into upper lobes, middle lobes, lower lobes, etc., with about 24 branches from the human bronchi (level 1) to the alveoli. How to divide the corresponding physiological channels can be understood by referring to the common general knowledge in the field.
The simulated slice image in step S22 is: an image observed when the internal passage of the model is observed at a corresponding position of the model; specifically, the viewing angle for viewing the simulated slice image may be matched to the viewing angle for capturing the channel real image by the image capturing part, for example, the viewing angle along the axial direction of the physiological channel may be used.
In addition, in an actual implementation process, the simulated slice image of each position may be formed after the three-dimensional model is formed and rendered, or the required simulated slice image may be formed only when step S22 is performed.
In this scheme, the model of the physiological channel provides a reliable basis reflecting the form of the physiological channel for positioning; on this basis, the channel real image and the simulated slice images provide a reliable, accurate and sufficient basis for positioning the catheter. Meanwhile, because the simulated slice images are extracted based on the detection information of the sensors, it is unnecessary to use all simulated slice images of the model; through registration against only local simulated slice images (namely the plurality of simulated slice images), the amount of data to be processed for positioning is effectively reduced, the processing flow is simplified, and the processing efficiency is improved.
In one embodiment, step S22 may include:
s221: determining a target channel range in the model of the physiological channel according to the detection information;
s222: and extracting the plurality of simulated slice images according to the target channel range.
The target channel range is matched with the channel range where the image acquisition part is actually located; because the detection information reflects the position of the corresponding part of the catheter, and the image acquisition part is arranged on the catheter, the actual channel range of the image acquisition part can be directly or indirectly embodied by taking the detection information as the basis.
Where matching may refer to being the same, close, at least partially overlapping, etc. The channel range divided by the method can be divided based on any principle, taking the bronchial tree (and the model thereof) as an example, the method can refer to a certain level of the bronchial tree, also can refer to a certain level between two levels of the bronchial tree, also can refer to a certain section area of the certain level, a certain section area between two levels and the like, and in addition, the channel range can also be determined by adopting a self-defined division mode. It is not departing from the scope of the embodiments of the present invention as long as a target channel range can be defined in the model and matched with the channel range where the image capturing part is actually located.
Taking fig. 7 as an example, a target channel range 302 may be determined in the model 301.
In one aspect of step S222, simulated slice images (i.e., the plurality of simulated slice images) at positions within the target channel range may be extracted. In some embodiments, each position in the target channel range may be filtered (for example, the filtering may be performed based on the quality, the similarity, the distance between corresponding positions, and the like of each simulated slice image), so as to select the simulated slice images of some positions for extraction. In some embodiments, a larger channel range including the target channel range may be expanded based on the target channel range, and the simulated slice images may be extracted for each position within the larger channel range.
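As a purely illustrative sketch of such screening (the minimum-spacing criterion and all names here are assumptions, not the method claimed above), candidate positions within the target channel range could be subsampled so that fewer simulated slice images need to be rendered and compared:

```python
import numpy as np

def subsample_positions(positions, min_spacing):
    """Keep candidate slice positions at least `min_spacing` apart,
    reducing the number of simulated slice images to extract."""
    kept = []
    for p in np.asarray(positions, dtype=float):
        # Keep a position only if it is far enough from all kept ones.
        if all(np.linalg.norm(p - q) >= min_spacing for q in kept):
            kept.append(p)
    return np.asarray(kept)
```

For example, two candidate positions 0.1 apart collapse to one when `min_spacing` exceeds that distance, while more distant positions are retained.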
Further, referring to fig. 6, step S221 may include:
s2211: projecting positions represented by detection information of L sensors into a coordinate system of the model, and determining the positions of the L sensors in the coordinate system;
s2212: and determining the target channel range according to the positions of the L sensors in the coordinate system and the relative intervals between the L sensors and the image acquisition part.
Because each sensor and the image acquisition part are all fixed on the catheter, the interval of any one sensor relative to the image acquisition part can help determine the position of the image acquisition part; therefore, L may be equal to 1 or greater than 1, i.e., 1 ≤ L ≤ N, and L is an integer.
The position represented by the detection information can be understood as the position of the sensor in a certain coordinate system, and in the above process, the position can be projected into the coordinate system of the model based on the known or calibrated mapping relation, and then the position of each sensor is located in the coordinate system of the model.
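A minimal sketch of this projection, assuming the known or calibrated mapping relation is a rigid transform (the rotation/translation representation and function name are assumptions for illustration, not the patent's method):

```python
import numpy as np

def project_to_model(positions, rotation, translation):
    """Project sensor-frame positions into the model coordinate system
    using a rigid transform obtained from a prior calibration step."""
    positions = np.asarray(positions, dtype=float)
    rotation = np.asarray(rotation, dtype=float)
    translation = np.asarray(translation, dtype=float)
    # x_model = R @ x_sensor + t, applied row-wise to all positions.
    return positions @ rotation.T + translation
```

With an identity rotation, this reduces to a pure translation of the sensor positions into the model frame.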
In the above alternative, since the target channel range is determined based on the position of the sensor, and the relative interval between the sensor and the image capturing unit is fixed, it can be ensured that the target channel range accurately covers the actual position of the image capturing unit.
In one embodiment, the L sensors include the sensor of the N sensors adjacent to the image capturing section. For example, if the image acquisition part is arranged at one end of the catheter, L may be 2, and the L sensors may be the head sensor and the tail sensor.
In the above alternative, the sensor adjacent to the image capturing section may help to improve the accuracy of the target passage range (i.e., the actual position of the image capturing section can be covered more accurately).
In the above specific examples of step S221 and step S222:
taking the bronchial tree as an example, the aim is to determine, from the target channel range, which segment of the virtual bronchial tree (i.e., the model) the current bronchoscope is in, rather than its most accurate location. For example, after step S21, or after the correction performed in step S26, it is possible to determine which level of the virtual bronchial tree (LMB, RMB, RB1-12, LB1-12, etc.) and which region the bronchoscope is in, by taking the 6-DOF data of the first sensor as a starting point and the 6-DOF data of the last sensor as an ending point.
Then, after determining the number of stages of the bronchoscope in the virtual bronchial tree, local discrete sampling is performed on the virtual bronchial tree (i.e., the model), and a 2D image dataset is obtained in consideration of different xyz positions and different α β γ angles, where the 2D image is a slice of the bronchial tree at a certain number of stages, that is, the simulated slice image extracted in step S222.
In one embodiment, referring to fig. 8, step S23 may include:
s231: determining a target image in the plurality of simulated slice images according to the similarity between the channel real image and the plurality of simulated slice images;
s232: and determining the position of the catheter in the physiological channel according to the position of the target image in the model.
The process of calculating the similarity may be implemented by any existing or improved algorithm, and in one example, the COS distance, the euclidean distance, and the like between the images may be calculated as information representing the similarity.
In one scheme, one or more images with the highest similarity may be selected as the target image, and in other schemes, a plurality of images with the highest similarity may be subjected to screening (for example, screening is performed by referring to factors other than the similarity), so as to obtain one target image.
In one specific example, the COS (cosine) distance can be used as the similarity measure for local registration between the channel real image and the simulated slice images. In this way, the simulated slice image most similar to the real image is obtained, the accurate position of the image acquisition part is calculated, and 6-degree-of-freedom data (i.e., the position) of the catheter and the like (such as a bronchoscope) is obtained; this data can then be updated as the starting point, with the target point position as the end point, for real-time navigation. Because the method is based on local registration, there is no need to search globally over all 2D slices (i.e., all simulated slice images) of the virtual bronchial tree model, so the complexity is low and the method meets real-time requirements.
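This local registration by cosine (COS) similarity can be sketched as follows, treating images as flattened intensity vectors (function names are illustrative, not from the patent):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine (COS) similarity between two images as flattened vectors."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_matching_slice(real_image, simulated_slices):
    """Return the index of the simulated slice most similar to the real
    image, plus all similarity scores; only the local set of slices
    (those from the target channel range) is searched, not the whole model."""
    scores = [cosine_similarity(real_image, s) for s in simulated_slices]
    return int(np.argmax(scores)), scores
```

The position of the winning slice in the model then serves as the estimated position of the image acquisition part.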
In the above alternative, based on the similarity between the images, the target image can be accurately determined, thereby ensuring the accuracy of catheter positioning.
In one embodiment, referring to fig. 9, before step S21, the method further includes:
s24: before the catheter enters a physiological channel, establishing the model according to the scanning data of the physiological channel;
s25: and determining a navigation path according to the model and the marked target point, and using the navigation path as a movement basis of the catheter after entering the physiological channel.
The scan data may be, for example, CT scan data, and may be combined with other calibrated or measured data when establishing the model, as long as the model corresponding to the physiological channel can be formed, without departing from the scope of the embodiments of the present invention. The navigation path therein can be understood with reference to the navigation path 303 in fig. 10.
Step S25 can be implemented automatically or in combination with human input, and the target point can be marked in the model by the doctor (or other personnel).
In an example, the determination of the model may be implemented by using a CT imaging technology and a 3D rendering technology, specifically, 3D CT data (i.e., scan data of a physiological channel) may be obtained by CT scanning, and then a virtual bronchial tree (i.e., a model) may be obtained by using the 3D rendering technology, and then, after the position of the target point in the virtual bronchial tree is determined, a corresponding path plan may be given, so as to obtain a navigation path.
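As a hedged illustration of the path-planning step, assuming the rendered model has been reduced to a centerline graph of airway nodes (the graph representation and all node names below are assumptions), a breadth-first search yields the unique path through the tree from the entrance to the marked target point:

```python
from collections import deque

def plan_path(adjacency, start, target):
    """Find a path from `start` to `target` on an airway centerline graph.

    `adjacency` maps a node id to the list of its neighbouring node ids.
    On a tree-shaped bronchial graph the returned path is unique.
    """
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            # Walk back through parents to reconstruct the path.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adjacency.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None  # target unreachable
```

The returned node sequence would play the role of the navigation path (cf. path 303 in fig. 10).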
In some embodiments, after step S23, guiding information may be fed back based on the navigation path and the position determined in step S23, so as to inform the staff of the position of the catheter (e.g., bronchoscope) and of the end point in real time and to give automatic navigation path guidance. This technology can position automatically, has low requirements on staff skill, and its positioning process is fast and accurate.
In the above alternative, since the model is established according to the scan data, it can be effectively ensured that the model can accurately reflect the form of the physiological channel, and simultaneously, the navigation path determined based on the model can accurately guide the catheter to travel in the physiological channel.
In one embodiment, the detection information of the sensor may be interfered due to the influence of physiological activities (e.g., respiration), so that the detection result is difficult to match with the model, and therefore, the detection information needs to be corrected before step S22.
In the following, the need for correcting the detected information is explained by taking the influence of the respiratory disturbance on the bronchial tree as an example, first with reference to fig. 11.
Because of respiratory disturbance, the lung contracts when exhaling and expands when inhaling, so its shape differs greatly from that of the virtual bronchial tree (i.e., the model) rendered from preoperative CT. This can be understood as follows: the sensor's coordinate system and the coordinate system of the virtual bronchial tree (i.e., the model) are not aligned. As a result, although the position coordinates of the sensors are accurate, the relative position (at which position of the bronchial tree) deviates greatly. As shown in fig. 11, the bronchial tree contracts inward from the solid line to the dashed line in the figure, and the catheter carrying the magnetic sensors likewise moves from the solid-line position to the dashed-line position. Directly computing the relative position against the preoperative virtual bronchial tree may therefore cause erroneous judgment: the breathing disturbance changes the relative position.
Considering the respiratory model of the lung, the respiratory deformation of the lower lobe is larger than that of the middle and upper lobes; the sensors should therefore be distributed over different lobes as much as possible (for example, during navigation some in the lower lobe and some in the middle lobe), and the number of sensors should be greater than 2. Since the position distribution of the sensors is known, the geometric relationship is unchanged and the catheter length is unchanged, then: L1 + L0 = L2 + L0. That is, the distance between two sensors is fixed, and the 6-DOF information (i.e., the detection information) of the previous sensor can be used, through geometric calculation, to predict the 6-DOF information of the next sensor, thereby achieving the correction effect.
As the patient breathes, the catheter 101 and the sensor 102 change from the position corresponding to L2 to the position corresponding to L1 in the figure, and directly computing the relative position against the preoperative virtual bronchial tree may cause erroneous judgment. However, the sensor degrees of freedom can sense the change of the catheter, and the catheter length is unchanged, so the coordinates of the latter sensor can be corrected from the coordinates of the former sensor, because L1 equals L2.
On this basis, referring to fig. 9, before step S22 the method may further include:
s26: and correcting at least part of the actual detection information of the sensors according to the actual detection information of the N sensors and the interval length information among the sensors to obtain corrected detection information.
Further, through step S26, the plurality of simulated slice images to be extracted in step S22 are extracted based on the corrected detection information.
The interval length information indicates the length of the catheter section between sensors in the catheter. It may include the length of the catheter portion between adjacent sensors, and may also include the length of the catheter portion between non-adjacent sensors.
In the scheme, the correction of the detection information is realized, and the correction result can be restrained by the distribution positions of the sensors due to the combination of the interval length information among the sensors in the correction process, so that the accuracy of the corrected detection information is improved.
Further, step S26 may specifically include:
s260: for any k-th sensor, correcting the actual detection information of the k-th sensor according to the detection information of one or more sensors located between the k-th sensor and the physiological channel entrance, together with the interval length information between those sensors and the k-th sensor, where k is greater than or equal to 2; the k-th sensor refers to the k-th of the N sensors distributed sequentially along a target order, and the target order is opposite to the order in which the sensors successively enter the physiological channel, i.e., it is the order of the sensors in the direction away from the physiological channel entrance.
Furthermore, k may take different values one by one (e.g., 2, 3, 4, …, continuous or discontinuous), so that step S260 is performed one by one for each of the at least part of the sensors.
Still further, referring to fig. 12, step S260 may specifically include:
s261: predicting at least part of detection information of the kth sensor according to actual detection information or corrected detection information of the mth sensor and interval length information between the kth sensor and the mth sensor to obtain predicted detection information of the kth sensor; wherein m is less than k;
s262: and correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor to obtain the corrected detection information of the kth sensor.
In the above embodiments, the correction of a sensor is performed based on the detection information of the sensors in front of it. In a physiological channel such as the bronchial tree, the closer a sensor is to the channel entrance (for the bronchial tree, the closer to the upper lobe of the lung), the less its detection is disturbed by physiological activity (e.g., by respiration); hence the front sensors receive less interference. Using the front sensors to correct and compensate the rear sensors can therefore eliminate or reduce the influence of the interference on the detection result and improve the accuracy of the detection information.
In other words, in consideration of the lung breathing, the detection information (e.g., coordinates, angles) of the front sensor is less affected than the detection information (e.g., coordinates, angles) of the rear sensor, and the correction of the detection information (e.g., correction of coordinates, angles) enables each sensor to give accurate detection information.
In a specific example, where m is k-1, the detection information of at least some of the sensors is sequentially corrected along the target sequence. Further, the detection information of the sensors may be corrected one by one from front to back, and the detection information of the previous sensor in the order of the target among the adjacent sensors may be used to correct the next sensor. According to the scheme, correction can be carried out on the basis of accurate detection information in each correction.
In other examples, m may not be equal to k-1, and the sensor for correcting the detection information of the kth sensor is not limited to one.
The front sensor(s) refer to the sensors earlier in the target order; the rear sensor(s) refer to the sensors later in the target order.
In one embodiment, the predicted detection information includes position information of a predicted position of the kth sensor, and a distance between the predicted position and a position represented by the detection information of the mth sensor is matched with interval length information between the kth sensor and the mth sensor. The distance and the interval may be matched in the same way or in a similar way (for example, smaller than a certain distance threshold). Therefore, the constraint of the position and interval length information of the mth sensor on the predicted position is realized, and the prediction result can be accurately matched with the position and interval length.
In addition to distance, the attitude of the mth sensor may also place constraints on the predicted position.
Therefore, referring to fig. 13, step S261 may include:
s2611: determining a corresponding extension line according to the actual detection information or the corrected detection information of the mth sensor;
s2612: and determining the predicted position according to the extension line and the interval length information between the kth sensor and the mth sensor.
The position of the extension line matches the position represented by the corresponding detection information, for example, the extension line may pass through the position in the detection information of the m-th sensor (for example, the coordinates of x, y, and z in the detection information), and the extension direction of the extension line matches the posture represented by the corresponding detection information (for example, the extension direction matches α, β, and γ in the detection information).
Since the attitude of the sensor is actually the attitude of the catheter portion where it is located, which varies with the curvature of the catheter, the extension direction may specifically match the tangential direction of the catheter portion where the sensor is located, and point to the side of the next sensor in the target sequence, for example: the extending direction may be the same as, similar to (the angle difference is less than a certain threshold) the tangential direction, or a specified angle with the tangential direction. In the above scheme, the constraint of the posture of the mth sensor on the position of the kth sensor is fully considered, and the prediction result can be accurately matched with the posture of the mth sensor (namely matched with the bending condition of the corresponding catheter part).
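The extension-line prediction of steps S2611-S2612 can be sketched as follows (a simplified straight-line approximation; in practice the catheter bends, so this is illustrative only, and the function name is an assumption):

```python
import numpy as np

def predict_position(pos_m, tangent_m, interval_length):
    """Predict the k-th sensor position by extending from the m-th sensor
    along the tangent direction implied by its attitude, at a distance
    equal to the known inter-sensor catheter length."""
    tangent = np.asarray(tangent_m, dtype=float)
    tangent = tangent / np.linalg.norm(tangent)  # normalize direction
    return np.asarray(pos_m, dtype=float) + interval_length * tangent
```

By construction, the distance between the predicted position and the m-th sensor's position equals the interval length, satisfying the constraint described above.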
Furthermore, the position and the posture of the mth sensor can be fully considered in the position prediction of the kth sensor, so that the position and the posture of the mth sensor can be accurately and fully considered in the correction result, and the correction accuracy is improved.
In one embodiment, the predicted detection information further includes attitude information of a predicted attitude of the kth sensor, the predicted attitude matching the attitude of the mth sensor. It can be seen that the attitude prediction of the kth sensor is mainly constrained to the attitude of the mth sensor.
Furthermore, in the above scheme, the attitude of the mth sensor can be fully considered in the attitude prediction of the kth sensor, so that the correction accuracy is improved.
In one embodiment, step S262 may include:
correcting actual detection information of the kth sensor according to the predicted detection information of the kth sensor and set correction reference information;
wherein the correction reference information includes: first correction reference information, representing the matching degree between the corrected detection information of the corresponding sensor and the predicted detection information, and/or second correction reference information, representing the matching degree between the corrected detection information of the corresponding sensor and the actual detection information.
The first correction reference information and the second correction reference information may be any information capable of representing the corresponding matching degree, and the content of the correction reference information may be changed arbitrarily based on different correction algorithms without departing from the scope of the embodiments of the present invention.
In one example, the correction reference information differs between sensors at different positions in the target sequence, and:
among the N sensors, the closer a sensor is to the entrance of the physiological channel, the lower the matching degree represented by its first correction reference information and the higher the matching degree represented by its second correction reference information.
In the above scheme, the closer a sensor is to the entrance of the physiological channel, the less interference it suffers (for example, the closer to the upper lobe of the lung, the less respiratory interference). The correction reference information of the different sensors therefore accurately matches their order in the sequence, and thus the distribution of interference magnitudes, ensuring the accuracy of the correction.
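One hedged way to realize this order-dependent matching degree is an arithmetic schedule of weighting values along the target sequence; the linear form, the endpoint values, and the function name below are assumptions for illustration, not values fixed by the patent.

```python
def weight_schedule(n_sensors, lam_min=0.0, lam_max=0.5):
    """First weighting value (weight of the predicted information) per sensor.

    Sensor 0 is closest to the channel entrance: it suffers the least
    respiratory interference, so it trusts its actual reading most
    (lambda = lam_min). Sensors deeper in the channel lean more on the
    prediction, up to lam_max (kept <= 0.5 so actual data still dominates).
    """
    if n_sensors == 1:
        return [lam_min]
    step = (lam_max - lam_min) / (n_sensors - 1)
    return [lam_min + i * step for i in range(n_sensors)]

lams = weight_schedule(6)
print([round(l, 2) for l in lams])  # [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
```

A non-arithmetic schedule, or one scaled by the separation lengths between sensors, would fit the scheme equally well.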
In addition, the matching degree may change by the same amount between each pair of adjacent sensors (e.g., the first weighting values of the sensors along the target sequence form an arithmetic progression, as may the second weighting values), or by different amounts (e.g., the difference between the first weighting values of adjacent sensors may itself vary). The amount of change may also be related to the separation length between the sensors; for example, the greater the separation distance, the larger the change in matching degree. However the specifically quantized correction reference information varies, it does not depart from the scope of the above scheme.
In a further aspect, the correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor and the set correction reference information specifically includes:
and according to the corrected reference information, carrying out weighted summation on the predicted detection information of the kth sensor and the actual detection information of the kth sensor to obtain the corrected detection information of the kth sensor.
wherein the first correction reference information is a first weighting value corresponding to the predicted detection information, and the second correction reference information is a second weighting value corresponding to the actual detection information.
The above scheme provides a quantifiable means of correcting the detection information: based on weighted summation, the predicted and actual detection information can be balanced effectively through the weighting values while keeping the algorithm relatively simple.
In one example, the sum of the first weighted value and the second weighted value is 1, and the value of the first weighted value is less than or equal to 0.5.
In addition, in some examples, if other factors are also considered in the correction, the weighted value may further include other weighted values corresponding to other factors.
For example, for the kth sensor, besides the detection information of the mth sensor, the correction may also draw on the detection information of a qth sensor (q < k, q ≠ m), on past detection information of the kth sensor itself (e.g., the detection information at the previous time), or on the as-yet-uncorrected detection information of a pth sensor (p > k). In such cases, the sum of the first weighting value and the second weighting value may be less than 1.
The data of six degrees of freedom of the sensor in the three-dimensional space are as follows: the coordinates in the x-axis direction, the coordinates in the y-axis direction, the coordinates in the z-axis direction, the rotation angle around the x-axis, the rotation angle around the y-axis, and the rotation angle around the z-axis. The data of the six degrees of freedom can be understood as the detection information.
Taking seven sensors as an example, their x-axis coordinates are x1, x2, ..., x7; their y-axis coordinates are y1, y2, ..., y7; their z-axis coordinates are z1, z2, ..., z7; and their three rotation angles are α1, α2, ..., α7; β1, β2, ..., β7; and γ1, γ2, ..., γ7.
Further, the actual detection information of the kth sensor may be corrected based on the following formula:

(xk', yk', zk', αk', βk', γk') = (1 − λ)(xk, yk, zk, αk, βk, γk) + λ(xp, yp, zp, αp, βp, γp)

wherein:
(xk', yk', zk', αk', βk', γk') characterizes the corrected detection information of the kth sensor, with xk', yk', zk' its coordinates along the x-, y-, and z-axes and αk', βk', γk' its rotation angles around those axes;
(xk, yk, zk, αk, βk, γk) characterizes the actual detection information of the kth sensor, with the analogous component meanings;
(xp, yp, zp, αp, βp, γp) characterizes the predicted detection information of the kth sensor, again with the analogous component meanings;
λ is the first weighting value, and 1 − λ is the second weighting value.
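The correction formula above can be sketched directly in code. The 6-tuple layout follows the six degrees of freedom described earlier; the function name is illustrative.

```python
def correct_detection(actual, predicted, lam):
    """Weighted summation of actual and predicted 6-DOF detection information.

    actual, predicted: (x, y, z, alpha, beta, gamma) tuples.
    lam: first weighting value (0 <= lam <= 0.5), so the actual reading
    keeps weight 1 - lam >= 0.5 and always dominates.
    """
    return tuple((1 - lam) * a + lam * p for a, p in zip(actual, predicted))

actual = (10.0, 4.0, 2.0, 0.0, 0.0, 0.0)     # measured, respiration-disturbed
predicted = (12.0, 8.0, 2.0, 0.0, 0.0, 0.0)  # extrapolated from the mth sensor
print(correct_detection(actual, predicted, 0.25))
# (10.5, 5.0, 2.0, 0.0, 0.0, 0.0)
```

Each component is corrected independently, which matches the formula's element-wise weighted sum.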
It can be seen that, under the respiratory model of the lung, the respiratory deformation of the lower lobe is larger than that of the middle and upper lobes, so the noise ε increases from top to bottom. Based on this, the above scheme proposes a sequential correction method: the coordinates and angles (i.e., detection information) of a preceding sensor (closer to the upper lobe and less disturbed by breathing) are used to perform respiratory compensation on the detection information of a following sensor, yielding more accurate coordinates and angles. The correction computes the distance (i.e., the interval length information) and assigns weights, embodied by the first weighting value λ and the second weighting value 1 − λ.
In addition, because the six-degree-of-freedom information of the (k−1)th sensor and the interval length between it and the kth sensor behind it are known, the detection information of the kth sensor can be predicted under these constraints using any existing or improved prediction algorithm in the field; some schemes may also incorporate other information into the prediction. The prediction may cover all of the kth sensor's detection information (e.g., all six degrees of freedom) or only part of it (e.g., the x-, y-, and z-axis coordinates). Whichever detection information is predicted, and by whatever means, the prediction does not depart from the scope of the above embodiments.
Integrating the specific schemes of steps S21 to S26: considering that respiratory interference may make the sensor positioning inaccurate, a CT scan may be performed on the patient before the operation to generate a virtual bronchial tree (i.e., the model); the plurality of sensors then provide a coarse-grained local position, locking the bronchoscope (i.e., the image acquisition portion) to a certain region of the virtual bronchial tree; finally, local registration between the channel real image and the 2D slice images (i.e., the simulated slice images) of that region yields a fine-grained position. This aligns the sensor coordinate system with the model coordinate system, achieving accurate positioning.
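The fine-grained step of this pipeline, choosing the simulated slice most similar to the channel real image, can be sketched as follows. Normalized cross-correlation is one plausible similarity measure; the patent does not fix a specific one, and the function name is an assumption.

```python
import numpy as np

def best_slice(real_image, simulated_slices):
    """Return the index of the simulated slice most similar to the real
    image, scoring by normalized cross-correlation."""
    r = (real_image - real_image.mean()) / (real_image.std() + 1e-9)
    best_i, best_score = -1, -np.inf
    for i, s in enumerate(simulated_slices):
        sn = (s - s.mean()) / (s.std() + 1e-9)
        score = float((r * sn).mean())  # NCC: 1.0 means identical structure
        if score > best_score:
            best_i, best_score = i, score
    return best_i

rng = np.random.default_rng(0)
real = rng.random((32, 32))
slices = [rng.random((32, 32)) for _ in range(4)]
slices.append(real + 0.01 * rng.random((32, 32)))  # near-duplicate of the real view
print(best_slice(real, slices))  # 4
```

The position of the winning slice in the model then gives the catheter's fine-grained position, as in claim 5.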
Referring to fig. 14, the navigation processing device 400 in the physiological channel includes:
an obtaining module 401, configured to obtain, after the catheter enters the physiological channel, actual detection information of the N sensors and a channel real image acquired by the image acquisition part, where the detection information represents the position of the catheter portion where the corresponding sensor is located;
a slice extraction module 402, configured to extract a plurality of simulated slice images from the model of the physiological channel according to the detection information; the simulated slice image is as follows: an image observed when the internal passage of the model is observed at a corresponding position of the model;
a positioning module 403, configured to determine a position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images.
Referring to fig. 15, the navigation processing device 400 in the physiological channel further includes:
a correcting module 406, configured to correct at least part of the sensors' actual detection information according to the actual detection information of the N sensors and the interval length information between the sensors, so that the plurality of simulated slice images are extracted based on the corrected detection information; the interval length information indicates the length of the catheter portion between the sensors.
Referring to fig. 15, the navigation processing device 400 in the physiological channel further includes:
a modeling module 404, configured to establish the model according to the scan data of the physiological channel before the catheter enters the physiological channel;
a path determining module 405, configured to determine a navigation path according to the model and the marked target point, so as to use the navigation path as a basis for movement of the catheter after entering the physiological channel.
Referring to fig. 16, an electronic device 50 is provided, including:
a processor 51; and
a memory 52 for storing executable instructions of the processor;
wherein the processor 51 is configured to perform the above-mentioned method via execution of the executable instructions.
The processor 51 is capable of communicating with the memory 52 via a bus 53.
Embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the above-mentioned method.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (23)

1. A navigation processing method in a physiological channel, characterized in that a catheter is adopted, the catheter is provided with N sensors and an image acquisition part, the N sensors are sequentially distributed at different positions along the length direction of the catheter, and N is greater than or equal to 2;
the navigation processing method comprises the following steps:
after the catheter enters a physiological channel, acquiring actual detection information of the N sensors and a channel real image acquired by the image acquisition part, wherein the detection information represents the position of the catheter portion where the corresponding sensor is located;
extracting a plurality of simulated slice images from a model of the physiological channel according to the detection information; the simulated slice image is as follows: an image observed when the internal passage of the model is observed at a corresponding position of the model;
and determining the position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images.
2. The navigation processing method of claim 1,
extracting a plurality of simulated slice images from the model of the physiological channel according to the detection information, which specifically comprises:
determining a target channel range in the model of the physiological channel according to the detection information; the target channel range is matched with the channel range where the image acquisition part is actually located;
and extracting the plurality of simulated slice images according to the target channel range.
3. The navigation processing method of claim 2,
determining a target channel range in the model of the physiological channel according to the detection information, specifically comprising:
projecting positions represented by detection information of L sensors into a coordinate system of the model, and determining the positions of the L sensors in the coordinate system; wherein L is more than or equal to 1 and less than or equal to N;
and determining the target channel range according to the positions of the L sensors in the coordinate system and the relative intervals between the L sensors and the image acquisition part.
4. The navigation processing method of claim 3, wherein the L sensors include a sensor of the N sensors that is adjacent to the image acquisition portion.
5. The navigation processing method according to any one of claims 1 to 4, wherein determining the position of the catheter in the physiological channel from the channel real image and the plurality of simulated slice images specifically comprises:
determining a target image in the plurality of simulated slice images according to the similarity between the channel real image and the plurality of simulated slice images;
and determining the position of the catheter in the physiological channel according to the position of the target image in the model.
6. The navigation processing method according to any one of claims 1 to 4,
before extracting a plurality of simulated slice images from the model of the physiological channel according to the detection information, the method further comprises the following steps:
according to the actual detection information of the N sensors and the interval length information between the sensors, correcting at least part of the sensors' actual detection information to obtain corrected detection information, so that: the plurality of simulated slice images are extracted based on the corrected detection information, and the interval length information indicates the length of the catheter portion between the sensors.
7. The navigation processing method of claim 6,
according to the actual detection information of the N sensors and the interval length information between the sensors, correcting at least part of the actual detection information of the sensors to obtain corrected detection information, specifically including:
for any kth sensor, correcting the actual detection information of the kth sensor according to the detection information of one or more sensors located between the kth sensor and the physiological channel entrance and the interval length information between those sensors and the kth sensor, wherein k is greater than or equal to 2, the kth sensor refers to the kth of the N sensors distributed sequentially along a target sequence, and the target sequence is opposite to the order in which the sensors enter the physiological channel.
8. The navigation processing method of claim 7,
correcting the actual detection information of the kth sensor according to the detection information of one or more sensors between the kth sensor and the physiological channel entrance and the interval length information between those sensors and the kth sensor, to obtain corrected detection information of the kth sensor, specifically comprising:
predicting at least part of detection information of the kth sensor according to actual detection information or corrected detection information of the mth sensor and interval length information between the kth sensor and the mth sensor to obtain predicted detection information of the kth sensor; wherein m is less than k;
and correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor to obtain the corrected detection information of the kth sensor.
9. The method as claimed in claim 8, wherein m = k − 1, and the detection information of at least some of the sensors is sequentially corrected along the target sequence.
10. The navigation processing method according to claim 8, wherein the predicted detection information includes position information of a predicted position of the kth sensor, and the distance between the predicted position and the position characterized by the detection information of the mth sensor matches the interval length information between the kth sensor and the mth sensor.
11. The navigation processing method of claim 10, wherein the detection information further characterizes a pose of a catheter site at which the corresponding sensor is located;
correcting actual detection information of the kth sensor according to the predicted detection information of the kth sensor to obtain corrected detection information of the kth sensor, and the method comprises the following steps:
determining a corresponding extension line according to the actual detection information or the corrected detection information of the mth sensor, wherein the position of the extension line is matched with the position represented by the corresponding detection information, and the extension direction of the extension line is matched with the posture represented by the corresponding detection information;
and determining the predicted position according to the extension line and the interval length information between the kth sensor and the mth sensor.
12. The navigation processing method according to claim 8, wherein the predicted detection information further includes attitude information of a predicted attitude of the kth sensor, the predicted attitude matching the attitude of the mth sensor.
13. The navigation processing method of claim 8,
according to the predicted detection information of the kth sensor, correcting the actual detection information of the kth sensor to obtain the corrected detection information of the kth sensor, specifically comprising:
correcting actual detection information of the kth sensor according to the predicted detection information of the kth sensor and set correction reference information;
wherein the correction reference information includes: first correction reference information, representing the degree to which the corrected detection information of the corresponding sensor matches the predicted detection information; and/or second correction reference information, representing the degree to which the corrected detection information of the corresponding sensor matches the actual detection information.
14. The navigation processing method of claim 13, wherein the correction reference information differs between sensors at different positions in the sequence, and:
among the N sensors, the closer a sensor is to the entrance of the physiological channel, the lower the matching degree represented by its first correction reference information and the higher the matching degree represented by its second correction reference information.
15. The navigation processing method of claim 13,
correcting actual detection information of the kth sensor according to the predicted detection information of the kth sensor and the set correction reference information, and specifically comprises the following steps:
according to the correction reference information, performing weighted summation on the predicted detection information of the kth sensor and the actual detection information of the kth sensor to obtain the corrected detection information of the kth sensor; wherein the first correction reference information is a first weighting value corresponding to the predicted detection information, and the second correction reference information is a second weighting value corresponding to the actual detection information.
16. The navigation processing method of claim 15,
according to the correction reference information, performing weighted summation on the predicted detection information of the kth sensor and the actual detection information of the kth sensor to obtain the corrected detection information of the kth sensor, specifically comprising:
correcting the actual detection information of the kth sensor based on the following formula:
(xk', yk', zk', αk', βk', γk') = (1 − λ)(xk, yk, zk, αk, βk, γk) + λ(xp, yp, zp, αp, βp, γp)
wherein:
(xk', yk', zk', αk', βk', γk') characterizes the corrected detection information of the kth sensor;
xk' characterizes the coordinate along the x-axis in the corrected detection information of the kth sensor;
yk' characterizes the coordinate along the y-axis in the corrected detection information of the kth sensor;
zk' characterizes the coordinate along the z-axis in the corrected detection information of the kth sensor;
αk' characterizes the rotation angle around the x-axis in the corrected detection information of the kth sensor;
βk' characterizes the rotation angle around the y-axis in the corrected detection information of the kth sensor;
γk' characterizes the rotation angle around the z-axis in the corrected detection information of the kth sensor;
(xk, yk, zk, αk, βk, γk) characterizes the actual detection information of the kth sensor;
xk characterizes the coordinate along the x-axis in the actual detection information of the kth sensor;
yk characterizes the coordinate along the y-axis in the actual detection information of the kth sensor;
zk characterizes the coordinate along the z-axis in the actual detection information of the kth sensor;
αk characterizes the rotation angle around the x-axis in the actual detection information of the kth sensor;
βk characterizes the rotation angle around the y-axis in the actual detection information of the kth sensor;
γk characterizes the rotation angle around the z-axis in the actual detection information of the kth sensor;
(xp, yp, zp, αp, βp, γp) characterizes the predicted detection information of the kth sensor;
xp characterizes the coordinate along the x-axis in the predicted detection information of the kth sensor;
yp characterizes the coordinate along the y-axis in the predicted detection information of the kth sensor;
zp characterizes the coordinate along the z-axis in the predicted detection information of the kth sensor;
αp characterizes the rotation angle around the x-axis in the predicted detection information of the kth sensor;
βp characterizes the rotation angle around the y-axis in the predicted detection information of the kth sensor;
γp characterizes the rotation angle around the z-axis in the predicted detection information of the kth sensor;
λ is the first weighting value; and
1 − λ is the second weighting value.
17. The navigation processing method according to any one of claims 1 to 4, further comprising:
before the catheter enters a physiological channel, establishing the model according to the scanning data of the physiological channel;
and determining a navigation path according to the model and the marked target point, and using the navigation path as a movement basis of the catheter after entering the physiological channel.
18. The navigation processing method according to any one of claims 1 to 4, wherein the physiological pathway is a bronchial tree.
19. A navigation processing device in a physiological channel is characterized in that a catheter is adopted, the catheter is provided with N sensors and an image acquisition part, the N sensors are sequentially distributed at different positions in the length direction of the catheter, and N is greater than or equal to 2;
a navigation processing apparatus comprising:
the acquisition module is used for acquiring actual detection information of the N sensors and a channel real image acquired by the image acquisition part after the catheter enters a physiological channel, wherein the detection information represents the position of the catheter where the corresponding sensor is located;
the slice extraction module is used for extracting a plurality of simulation slice images from the model of the physiological channel according to the detection information;
and the positioning module is used for determining the position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images.
20. An electronic device, comprising a processor and a memory,
the memory is used for storing codes;
the processor is configured to execute the codes in the memory to implement the navigation processing method of any one of claims 1 to 18.
21. A storage medium having stored thereon a computer program which, when executed by a processor, implements the navigation processing method of any one of claims 1 to 18.
22. A navigation system within a physiological passageway, comprising: the device comprises a catheter, N sensors, an image acquisition part and a data processing part, wherein the N sensors and the image acquisition part are arranged on the catheter, the N sensors are sequentially distributed at different positions in the length direction of the catheter, and the data processing part can be directly or indirectly communicated with the N sensors and the image acquisition part;
the data processing section is configured to execute the navigation processing method according to any one of claims 1 to 18.
23. The navigation system within a physiological channel of claim 22, wherein said sensor is a magnetic navigation sensor.
CN202110407995.3A 2020-12-31 2021-04-15 Navigation processing method, device, system, equipment and medium in physiological channel Active CN113100943B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011637832 2020-12-31
CN2020116378326 2020-12-31

Publications (2)

Publication Number Publication Date
CN113100943A true CN113100943A (en) 2021-07-13
CN113100943B CN113100943B (en) 2023-05-23

Family

ID=76717458

Family Applications (4)

Application Number Title Priority Date Filing Date
CN202110406710.4A Active CN113116524B (en) 2020-12-31 2021-04-15 Detection compensation method, device, navigation processing method, device and navigation system
CN202120775579.4U Active CN215192193U (en) 2020-12-31 2021-04-15 In-vivo navigation device, in-vivo navigation system and medical treatment system
CN202110407995.3A Active CN113100943B (en) 2020-12-31 2021-04-15 Navigation processing method, device, system, equipment and medium in physiological channel
CN202110408017.0A Active CN113116475B (en) 2020-12-31 2021-04-15 Transcatheter navigation processing method, device, medium, equipment and navigation system

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN202110406710.4A Active CN113116524B (en) 2020-12-31 2021-04-15 Detection compensation method, device, navigation processing method, device and navigation system
CN202120775579.4U Active CN215192193U (en) 2020-12-31 2021-04-15 In-vivo navigation device, in-vivo navigation system and medical treatment system

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110408017.0A Active CN113116475B (en) 2020-12-31 2021-04-15 Transcatheter navigation processing method, device, medium, equipment and navigation system

Country Status (1)

Country Link
CN (4) CN113116524B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114305686A (en) * 2021-12-20 2022-04-12 杭州堃博生物科技有限公司 Positioning processing method, device, equipment and medium based on magnetic sensor
CN116433874A (en) * 2021-12-31 2023-07-14 杭州堃博生物科技有限公司 Bronchoscope navigation method, device, equipment and storage medium
WO2023134040A1 (en) * 2022-01-13 2023-07-20 杭州堃博生物科技有限公司 Data processing part, processing apparatus, surgical system, and device and medium

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
WO2023179339A1 (en) * 2022-03-23 2023-09-28 上海微创微航机器人有限公司 Catheter shape and force sensing method, surgical navigation method, and interventional operation system

Citations (11)

Publication number Priority date Publication date Assignee Title
CN102487602A (en) * 2010-08-27 2012-06-06 Olympus Medical Systems Corp. Endoscope shape detection device and method for detecting shape of insertion portion of endoscope
CN103648361A (en) * 2011-05-13 2014-03-19 Intuitive Surgical Operations, Inc. Medical system providing dynamic registration of a model of an anatomical structure for image-guided surgery
US20140187915A1 (en) * 2012-12-27 2014-07-03 General Electric Company Method and system for position orientation correction in navigation
WO2014141968A1 (en) * 2013-03-12 2014-09-18 Olympus Medical Systems Corp. Endoscopic system
CN104306072A (en) * 2014-11-07 2015-01-28 Liu Hongyi Medical navigation system and method
CN104540439A (en) * 2012-08-14 2015-04-22 Intuitive Surgical Operations, Inc. Systems and methods for registration of multiple vision systems
US20170065206A1 (en) * 2014-04-29 2017-03-09 Koninklijke Philips N.V. Device for determining a specific position of a catheter
CN108175502A (en) * 2017-11-29 2018-06-19 Suzhou Langkai Xintong Information Technology Co., Ltd. Bronchoscope electromagnetic navigation system
CN108451639A (en) * 2017-02-22 2018-08-28 Covidien LP Integration of multiple data sources for localization and navigation
CN109124766A (en) * 2017-06-21 2019-01-04 Biosense Webster (Israel) Ltd. Improving registration using trajectory information with shape sensing
WO2020070647A1 (en) * 2018-10-04 2020-04-09 Biosense Webster (Israel) Ltd. Computerized tomography (ct) image correction using position and direction (p&d) tracking assisted optical visualization

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006078678A2 (en) * 2005-01-18 2006-07-27 Traxtal Inc. Method and apparatus for guiding an instrument to a target in the lung
US20080249395A1 (en) * 2007-04-06 2008-10-09 Yehoshua Shachar Method and apparatus for controlling catheter positioning and orientation
EP2348982B1 (en) * 2008-12-03 2020-03-25 St. Jude Medical, Atrial Fibrillation Division, Inc. System for determining the position of the tip of a medical catheter within the body of a patient
EP2449954B1 (en) * 2010-05-31 2014-06-04 Olympus Medical Systems Corp. Endoscopic form detection device and form detecting method of insertion section of endoscope
US8403829B2 (en) * 2010-08-27 2013-03-26 Olympus Medical Systems Corp. Endoscopic form detection device and form detecting method of insertion section of endoscope
WO2013040498A1 (en) * 2011-09-16 2013-03-21 Translucent Medical, Inc. System and method for virtually tracking a surgical tool on a movable display
US10082395B2 (en) * 2012-10-03 2018-09-25 St. Jude Medical, Atrial Fibrillation Division, Inc. Scaling of electrical impedance-based navigation space using inter-electrode spacing
US10098566B2 (en) * 2013-09-06 2018-10-16 Covidien Lp System and method for lung visualization using ultrasound
JP2015181643A (en) * 2014-03-24 2015-10-22 オリンパス株式会社 Curved shape estimation system, tubular insert system, and method for estimating curved shape of curved member
WO2016018648A1 (en) * 2014-07-28 2016-02-04 Intuitive Surgical Operations, Inc. Systems and methods for planning multiple interventional procedures
US10512510B2 (en) * 2014-12-22 2019-12-24 Intuitive Surgical Operations, Inc. Flexible electromagnetic sensor
KR102542190B1 (en) * 2015-04-06 2023-06-12 인튜어티브 서지컬 오퍼레이션즈 인코포레이티드 System and method of registration compensation in image-guided surgery
WO2017030648A2 (en) * 2015-06-17 2017-02-23 The Charles Stark Draper Laboratory, Inc Systems and methods for determining shape and/or position
WO2017158397A1 (en) * 2016-03-13 2017-09-21 Synaptive Medical (Barbados) Inc. System and method for sensing tissue deformation
EP3463136B1 (en) * 2016-07-15 2020-12-02 St. Jude Medical, Cardiology Division, Inc. Methods and systems for generating smoothed images of an elongate medical device
KR102536940B1 (en) * 2016-12-28 2023-05-30 아우리스 헬스, 인코포레이티드 Device for Flexible Instrument Insertion
WO2018183727A1 (en) * 2017-03-31 2018-10-04 Auris Health, Inc. Robotic systems for navigation of luminal networks that compensate for physiological noise
US20190307511A1 (en) * 2018-04-10 2019-10-10 Biosense Webster (Israel) Ltd. Catheter localization using fiber optic shape sensing combined with current location
DE102018108643A1 (en) * 2018-04-11 2019-11-14 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. A position determining device for determining a position of an object within a tubular structure
CN111166329B (en) * 2018-10-24 2024-01-30 Sichuan Jinjiang Electronic Medical Device Technology Co., Ltd. Stretchable annular catheter form determination method and device
US11547492B2 (en) * 2018-11-07 2023-01-10 St Jude Medical International Holding, Sa.R.L. Mechanical modules of catheters for sensor fusion processes
CN109718437A (en) * 2018-12-28 2019-05-07 Beijing Aeonmed Co., Ltd. Respiratory parameter adjustment method and device for breathing support equipment, and breathing support equipment
CN111588464B (en) * 2019-02-20 2022-03-04 Minde Medical Robot (Suzhou) Co., Ltd. Surgical navigation method and system
CN110478040A (en) * 2019-08-19 2019-11-22 Wang Xiaoli Method and device for obtaining navigation images for alimentary-tract stent implantation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Bai Chunxue et al. (eds.): "Modern Respiratory Medicine" (《现代呼吸病学》), 30 November 2014 *

Also Published As

Publication number Publication date
CN113116524A (en) 2021-07-16
CN113100943B (en) 2023-05-23
CN215192193U (en) 2021-12-17
CN113116475A (en) 2021-07-16
CN113116524B (en) 2023-05-05
CN113116475B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
CN113100943A (en) Navigation processing method, device, system, equipment and medium in physiological channel
EP3417759B1 (en) Improvement of registration with trajectory information with shape sensing
US11931141B2 (en) Hybrid registration method
US11690527B2 (en) Apparatus and method for four dimensional soft tissue navigation in endoscopic applications
KR102567087B1 (en) Robotic systems and methods for navigation of luminal networks detecting physiological noise
EP3506827B1 (en) Respiration motion stabilization for lung magnetic navigation system
JP5372407B2 (en) Medical equipment
US20170340241A1 (en) Endoscopic examination support device, endoscopic examination support method, and endoscopic examination support program
US7901348B2 (en) Catheterscope 3D guidance and interface system
EP2691006B1 (en) System for shape sensing assisted medical procedure
EP3174466B1 (en) Probe localization
CN104540439A (en) Systems and methods for registration of multiple vision systems
CN104736085A (en) Determining position of medical device in branched anatomical structure
US20210378759A1 (en) Surgical tool navigation using sensor fusion
CN114332229A (en) Endoscope positioning processing method, device, operation system, equipment and medium
US20240041535A1 (en) Dynamic deformation tracking for navigational bronchoscopy
CN114271909A (en) Information processing method, device, system, equipment and medium for chest puncture
KR102501816B1 (en) A method for providing an automatic lung organ analysis service using artificial intelligence based on a patient's personalized index
CN114288523A (en) Detection method and device of flexible instrument, surgical system, equipment and medium
CN116019549A (en) Compensation calibration method, positioning navigation device and computer readable storage medium
CN116636928A (en) Registration progress detection method and system for lung trachea and electronic equipment
CN111386078A (en) Systems, methods, and computer readable media for non-rigidly registering electromagnetic navigation space to a CT volume

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant