CN113100943B - Navigation processing method, device, system, equipment and medium in physiological channel - Google Patents

Navigation processing method, device, system, equipment and medium in physiological channel

Info

Publication number
CN113100943B
Authority
CN
China
Prior art keywords
detection information
sensor
sensors
kth
channel
Prior art date
Legal status
Active
Application number
CN202110407995.3A
Other languages
Chinese (zh)
Other versions
CN113100943A (en)
Inventor
余坤璋
李楠宇
陈日清
徐宏
苏晨晖
Current Assignee
Hangzhou Kunbo Biotechnology Co Ltd
Original Assignee
Hangzhou Kunbo Biotechnology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Kunbo Biotechnology Co Ltd filed Critical Hangzhou Kunbo Biotechnology Co Ltd
Publication of CN113100943A
Application granted
Publication of CN113100943B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04: Instruments of A61B 1/00 combined with photographic or television appliances
    • A61B 1/267: Instruments of A61B 1/00 for the respiratory tract, e.g. laryngoscopes, bronchoscopes
    • A61B 1/2676: Bronchoscopes
    • A61B 1/273: Instruments of A61B 1/00 for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
    • A61B 1/307: Instruments of A61B 1/00 for the urinary organs, e.g. urethroscopes, cystoscopes
    • A61B 1/31: Instruments of A61B 1/00 for the rectum, e.g. proctoscopes, sigmoidoscopes, colonoscopes
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/105: Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107: Visualisation of planned trajectories or target regions
    • A61B 2034/108: Computer aided selection or customisation of medical implants or cutting guides
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2051: Electromagnetic tracking systems
    • A61B 2034/2065: Tracking using image or pattern recognition

Abstract

The invention provides a navigation processing method, device, system, equipment and medium in a physiological channel. A catheter provided with N sensors and an image acquisition part is used, and the navigation processing method comprises the following steps: after the catheter enters the physiological channel, acquiring actual detection information of the N sensors and a channel real image acquired by the image acquisition part, wherein the detection information characterizes the position of the catheter location at which the corresponding sensor is arranged; extracting a plurality of simulated slice images from a model of the physiological channel according to the detection information, a simulated slice image being the image observed when the internal channel of the model is viewed at the corresponding position of the model; and determining the position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images.

Description

Navigation processing method, device, system, equipment and medium in physiological channel
Technical Field
The present invention relates to the field of medical treatment, and in particular, to a method, apparatus, system, device, and medium for navigation processing in a physiological channel.
Background
In medical activities, a catheter often needs to be guided into a physiological channel of an animal or human body to facilitate procedures such as endoscopy and biopsy. After the catheter enters the physiological channel, the position of the catheter within the channel usually needs to be navigated.
In the prior art, a sensor can be arranged on the catheter, the motion trajectory of the sensor is acquired, and the catheter is then located through registration between the motion trajectory and a detection map of the physiological channel. However, it is difficult to acquire a continuous motion trajectory accurately and effectively, so registration and positioning may be affected and the accuracy of navigation in the physiological channel may be reduced.
Disclosure of Invention
The invention provides a navigation processing method, a device, a system, equipment and a medium in a physiological channel, which are used for solving the problem of poor accuracy of navigation in the physiological channel.
According to a first aspect of the present invention, there is provided a navigation processing method in a physiological channel, using a catheter, where the catheter is provided with N sensors and an image acquisition unit, and the N sensors are sequentially distributed at different positions along the length direction of the catheter, where N is greater than or equal to 2;
the navigation processing method comprises the following steps:
After the catheter enters a physiological channel, acquiring actual detection information of the N sensors and a channel real image acquired by the image acquisition part, wherein the detection information characterizes the position of the catheter location at which the corresponding sensor is arranged;
extracting a plurality of simulated slice images from the model of the physiological channel according to the detection information;
and determining the position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images.
Therefore, by using the model of the physiological channel, the invention provides a reliable basis for positioning that reflects the shape of the physiological channel, and on this basis the channel real image and the simulated slice images provide a reliable, accurate and sufficient basis for locating the catheter. Meanwhile, because the simulated slice images are extracted based on the detection information of the sensors, it is not necessary to use all simulated slice images of the model; registration against only local simulated slice images (i.e. the plurality of simulated slice images) effectively reduces the amount of data to be processed during positioning, simplifies the processing flow and improves processing efficiency.
Optionally, extracting a plurality of simulated slice images from the model of the physiological channel according to the detection information specifically includes:
Determining a target channel range in the model of the physiological channel according to the detection information; the target channel range is matched with the channel range in which the image acquisition part is actually positioned;
and extracting the plurality of simulated slice images according to the target channel range.
Optionally, determining, according to the detection information, a target channel range in the model of the physiological channel specifically includes:
projecting the positions characterized by the detection information of L sensors into a coordinate system of the model, and determining the positions of the L sensors in the coordinate system; wherein L is greater than or equal to 1 and less than or equal to N;
and determining the target channel range according to the positions of the L sensors in the coordinate system and the relative intervals of the L sensors and the image acquisition part.
In the above alternative, the target channel range is determined based on the positions of the sensors, and the relative interval between each sensor and the image acquisition part is fixed, so it can be ensured that the target channel range accurately covers the actual position of the image acquisition part.
Optionally, the L sensors include, among the N sensors, a sensor adjacent to the image acquisition part.
In this alternative, using a sensor adjacent to the image acquisition part improves the accuracy of the target channel range, i.e. the range covers the actual position of the image acquisition part more accurately.
Optionally, determining the position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images specifically includes:
determining a target image in the plurality of simulated slice images according to the similarity between the channel real image and the plurality of simulated slice images;
determining the position of the catheter in the physiological channel according to the position of the target image in the model.
In the above alternative scheme, based on the similarity between the images, accurate determination of the target image can be realized, thereby ensuring the accuracy of catheter positioning.
Optionally, before extracting the plurality of simulated slice images from the model of the physiological channel according to the detection information, the method further includes:
correcting the actual detection information of at least part of the sensors according to the actual detection information of the N sensors and the interval length information between the sensors to obtain corrected detection information, so that the plurality of simulated slice images are extracted according to the corrected detection information; the interval length information characterizes the length of the catheter portion between the sensors.
In this scheme, correction of the detection information is realized; because the interval length information between the sensors is incorporated into the correction process, the correction result is constrained by the distribution of the sensors, which improves the accuracy of the corrected detection information.
Optionally, according to the actual detection information of the N sensors and the interval length information between the sensors, correcting at least part of the actual detection information of the sensors to obtain corrected detection information, which specifically includes:
for any kth sensor, correcting the actual detection information of the kth sensor according to the detection information of one or more sensors located between the kth sensor and the physiological channel entrance and the interval length information between those sensors and the kth sensor, wherein k is greater than or equal to 2, the kth sensor refers to the kth of the N sensors arranged along a target order, and the target order is opposite to the order in which the sensors sequentially enter the physiological channel.
Optionally, correcting the actual detection information of the kth sensor according to the detection information of the one or more sensors located between the kth sensor and the physiological channel entrance and the interval length information between those sensors and the kth sensor, to obtain corrected detection information of the kth sensor, specifically includes:
Predicting at least part of detection information of the kth sensor according to the actual detection information or the corrected detection information of the mth sensor and the interval length information between the kth sensor and the mth sensor to obtain the prediction detection information of the kth sensor; wherein m is less than k;
and correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor to obtain corrected detection information of the kth sensor.
In the above embodiment, a sensor is corrected based on the detection information of the sensors in front of it. Taking the bronchial tree as an example of the physiological channel, the closer a front sensor is to the channel entrance (e.g. the closer it is to the upper lobe of the lung rather than deep in the body), the less it is disturbed by physiological activity such as respiration. Using the front sensors to correct and compensate the rear sensors can therefore eliminate or reduce the influence of the interference on the detection result and improve the accuracy of the detection information.
Optionally, m=k-1, and the detection information of at least part of the sensors is sequentially corrected along the target sequence.
In the scheme, each correction can be ensured to be carried out based on more accurate detection information.
Optionally, the predicted detection information includes position information of a predicted position of the kth sensor, and a distance between the predicted position and a position characterized by the detection information of the mth sensor matches interval length information between the kth sensor and the mth sensor.
Optionally, the detection information further characterizes the posture of the catheter position where the corresponding sensor is located;
correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor to obtain corrected detection information of the kth sensor, wherein the method comprises the following steps:
determining a corresponding extension line according to the actual detection information or the corrected detection information of the mth sensor, wherein the position of the extension line matches the position characterized by the corresponding detection information, and the extension direction of the extension line matches the posture characterized by the corresponding detection information;
and determining the predicted position according to the extension line and the interval length information between the kth sensor and the mth sensor.
In the above schemes, the position and posture of the mth sensor are fully taken into account when predicting the position of the kth sensor, so the correction result is more accurate and the correction accuracy is improved.
Optionally, the predicted detection information further includes posture information of a predicted posture of the kth sensor, and the predicted posture is matched with the posture of the mth sensor.
In this scheme, the posture of the mth sensor is fully taken into account when predicting the posture of the kth sensor, which improves the correction accuracy and makes the position corrected on the basis of this posture more accurate.
Optionally, correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor to obtain corrected detection information of the kth sensor, which specifically includes:
correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor and the set correction reference information;
wherein the correction reference information includes: the first correction reference information characterizes the matching degree of the detection information corrected by the corresponding sensor and the prediction detection information, and/or the second correction reference information characterizes the matching degree of the detection information corrected by the corresponding sensor and the actual detection information.
Optionally, the correction reference information for sensors of different orders is different, and:
among the N sensors, the closer a sensor is to the entrance of the physiological channel, the lower the matching degree characterized by its first correction reference information and the higher the matching degree characterized by its second correction reference information.
In the above scheme, the closer a sensor is to the entrance of the physiological channel, the smaller the interference it suffers (for example, it is closer to the upper lobe of the lung, where respiratory interference is smaller). Accordingly, the correction reference information of the different sensors more accurately matches the order in which the sensors are arranged, and thus matches the distribution of interference magnitude, ensuring the correction accuracy.
Optionally, correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor and the set correction reference information specifically includes:
according to the correction reference information, carrying out weighted summation on the prediction detection information of the kth sensor and the actual detection information of the kth sensor to obtain detection information after correction of the kth sensor; the first correction reference information is a first weighted value corresponding to the prediction detection information, and the second correction reference information is a second weighted value corresponding to the actual detection information.
In the scheme, a quantifiable processing means is provided for correction of the detection information, and based on a weighted summation mode, the prediction detection information and the actual detection information can be effectively considered based on the weighted value, and meanwhile, the relative simplification of an algorithm can be ensured.
Optionally, performing weighted summation on the predicted detection information of the kth sensor and the actual detection information of the kth sensor according to the correction reference information to obtain the corrected detection information of the kth sensor specifically includes:
correcting the actual detection information of the kth sensor based on the following formula:
(x_k', y_k', z_k', α_k', β_k', γ_k') = (1 - λ)(x_k, y_k, z_k, α_k, β_k, γ_k) + λ(x_p, y_p, z_p, α_p, β_p, γ_p)
wherein:
(x_k', y_k', z_k', α_k', β_k', γ_k') characterizes the corrected detection information of the kth sensor, in which x_k', y_k' and z_k' are the coordinates in the x-axis, y-axis and z-axis directions, and α_k', β_k' and γ_k' are the rotation angles around the x-axis, y-axis and z-axis;
(x_k, y_k, z_k, α_k, β_k, γ_k) characterizes the actual detection information of the kth sensor, with its components defined in the same way;
(x_p, y_p, z_p, α_p, β_p, γ_p) characterizes the predicted detection information of the kth sensor, with its components defined in the same way;
λ is the first weighted value; and
1 - λ is the second weighted value.
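The weighted summation above translates directly into code. The following sketch assumes the 6-DOF tuples are stored as NumPy arrays; it merely illustrates the formula and is not the patented implementation itself, and the example values are invented.

```python
import numpy as np

def correct_detection(actual: np.ndarray, predicted: np.ndarray, lam: float) -> np.ndarray:
    """Corrected 6-DOF detection information of the kth sensor.

    actual    -- (x_k, y_k, z_k, alpha_k, beta_k, gamma_k)
    predicted -- (x_p, y_p, z_p, alpha_p, beta_p, gamma_p)
    lam       -- first weighted value (weight of the predicted detection information);
                 (1 - lam) is the second weighted value (weight of the actual detection).
    """
    return (1.0 - lam) * actual + lam * predicted

# Illustrative values only: a sensor deep in the channel, where respiratory
# disturbance is larger, might be given a larger lambda.
actual = np.array([12.0, 3.5, -40.0, 0.10, 0.02, 1.57])
predicted = np.array([11.2, 3.9, -41.5, 0.12, 0.01, 1.55])
print(correct_detection(actual, predicted, lam=0.6))
```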
Optionally, the navigation processing method further includes:
before the catheter enters a physiological channel, establishing the model according to scanning data of the physiological channel;
and determining a navigation path according to the model and the marked target point, and using the navigation path as a movement basis after the catheter enters a physiological channel.
In the above alternative scheme, because the model is established according to the scanning data, the model can be effectively ensured to accurately reflect the form of the physiological channel, and meanwhile, the navigation path determined based on the model can accurately guide the catheter to travel in the physiological channel.
Optionally, the physiological channel is a bronchial tree.
According to a second aspect of the present invention, there is provided a navigation processing device in a physiological channel, which adopts a catheter, wherein the catheter is provided with N sensors and an image acquisition part, and the N sensors are sequentially distributed at different positions in the length direction of the catheter, wherein N is greater than or equal to 2;
a navigation processing device comprising:
The acquisition module is used for acquiring the actual detection information of the N sensors and the channel real image acquired by the image acquisition part after the catheter enters the physiological channel, wherein the detection information characterizes the position of the catheter location at which the corresponding sensor is arranged;
the slice extraction module is used for extracting a plurality of simulated slice images from the model of the physiological channel according to the detection information;
and the positioning module is used for determining the position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images.
According to a third aspect of the present invention, there is provided an electronic device comprising a processor and a memory,
the memory is used for storing codes;
the processor is configured to execute the code in the memory to implement the navigation processing method related to the first aspect and its alternatives.
According to a fourth aspect of the present invention, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the navigation processing method of the first aspect and alternatives thereof.
According to a fifth aspect of the present invention, there is provided a navigation system within a physiological channel, comprising: the device comprises a catheter, N sensors, an image acquisition part and a data processing part, wherein the N sensors and the image acquisition part are arranged on the catheter, the N sensors are sequentially distributed at different positions in the length direction of the catheter, and the data processing part can be directly or indirectly communicated with the N sensors and the image acquisition part;
The data processing section is configured to execute the navigation processing method related to the first aspect and its alternatives.
Optionally, the sensor is a magnetic navigation sensor.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings can be obtained according to these drawings without inventive faculty for a person skilled in the art.
FIG. 1 is a schematic diagram of a navigation system within a physiological channel in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the geometry of the catheter, image acquisition unit and sensor according to one embodiment of the invention;
FIG. 3 is a flow chart of a navigation processing method in a physiological channel according to an embodiment of the invention;
FIG. 4 is a schematic diagram of a model in an embodiment of the invention;
FIG. 5 is a flowchart of step S22 according to an embodiment of the present invention;
FIG. 6 is a flowchart of step S221 in an embodiment of the present invention;
FIG. 7 is a schematic diagram of a model and target channel range in accordance with an embodiment of the present invention;
FIG. 8 is a flowchart of step S23 according to an embodiment of the present invention;
FIG. 9 is a second flow chart of a navigation processing method in a physiological channel according to an embodiment of the invention;
FIG. 10 is a schematic diagram of a model and navigation path in accordance with an embodiment of the present invention;
FIG. 11 is a schematic diagram of respiratory disturbance in an embodiment of the present invention;
FIG. 12 is a flowchart of step S26 according to an embodiment of the present invention;
FIG. 13 is a flowchart of step S261 in an embodiment of the present invention;
FIG. 14 is a block diagram illustrating a navigation processing device in a physiological channel according to an embodiment of the present invention;
FIG. 15 is a second program module of the navigation processing device in the physiological channel according to an embodiment of the present invention;
fig. 16 is a schematic view of the configuration of an electronic device in an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
The navigation processing method and apparatus in a physiological channel provided in the embodiments of the present invention may be applied to an execution body (for example, a device or a combination of devices) having a data processing capability, and may be specifically understood as the electronic device 50 and the data processing unit 103 that are referred to later. At least part of the steps of the navigation processing method may be implemented based on LungPoint software.
Referring to fig. 1, the navigation system in the physiological channel may include a catheter 101, N sensors 102, where N is greater than or equal to 2, for example, 5, 6, 7, 8, 9, 10, etc., and the number of the sensors may be arbitrarily selected according to the requirements of medical activities, the type and form of the physiological channel, the detection accuracy of the sensors, etc.
Meanwhile, the navigation system further includes: the image acquisition unit 104 may be a part of an endoscopic apparatus, or the image acquisition unit 104 may be an image acquisition unit provided separately from an existing endoscopic apparatus. The image acquisition section is provided in the catheter 101, which may be provided at the end of the catheter 101 (or at the end which may be understood as being remote from the entrance to the physiological channel), without excluding the possibility of being provided at other locations.
The endoscope device may be understood as a component or a combination of components capable of performing an endoscope in a physiological channel, and may further include at least one of an illumination component, a packaging material, and the like, in addition to the image acquisition unit, but is not limited thereto, and may be configured to be assembled and packaged together. The endoscopic device may be provided at the distal end of the catheter 101 or at a position other than the distal end.
The catheter 101 may be understood as a structure that carries the sensors and is adapted to deliver the N sensors into the physiological channel. It may comprise, for example, a flexible tube or a rigid tube, in which instruments for guiding the catheter or other instruments for medical activities may be arranged, as well as lines, circuits and structures for electrically connecting the sensors 102 and the image acquisition unit 104 to the outside.
The sensor 102 may be understood as a sensor capable of detecting its own position; when the sensor 102 is disposed in the catheter, it can detect the position of the catheter location where it is arranged. In some embodiments the sensor may also detect its own posture (i.e. the posture of the catheter location where the sensor 102 is arranged), and the detection information detected by the sensor may then characterize the position (or the position and the posture) of the catheter location where the sensor 102 is arranged. In addition, the detection information that can be detected by the sensor is not limited to position and posture.
Any sensor in the art that can detect position (or position and attitude) does not depart from the scope of the embodiments of the present invention. In a further embodiment, the sensor 102 may be a magnetic navigation sensor, an optical fiber sensor, a shape sensor, or the like, and no matter what kind of sensor is used, the sensor does not deviate from the scope of the embodiment of the present invention.
If the physiological channel is a bronchial tree, the entirety of the image acquisition unit and the catheter (or the entirety of the endoscopic device and the catheter) can also be understood as a bronchoscope. Positioning of the catheter is also understood as positioning of the bronchoscope.
In the embodiment of the present invention, referring to the geometric schematic diagram shown in fig. 2, N sensors 102 are sequentially distributed at different positions along the length direction of the catheter 101, and further, a length of a catheter portion may be spaced between two adjacent sensors 102, where the length of the spaced catheter portion may be uniform or non-uniform, and in the example shown in fig. 2, the number of the sensors 102 is six.
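The geometric layout described above (N sensors at fixed, possibly non-uniform spacings along the catheter, plus an image acquisition part) can be recorded in a simple data structure. The following sketch is illustrative only; the field names (e.g. sensor_offsets_mm, camera_offset_mm) and the values are assumptions, not details given in the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CatheterConfig:
    """Hypothetical description of the fixed catheter geometry (lengths in mm)."""
    # Arc-length positions of the N sensors measured from the catheter tip;
    # the spacing between adjacent sensors may be uniform or non-uniform.
    sensor_offsets_mm: List[float]
    # Arc-length position of the image acquisition part (0.0 if it sits at the tip).
    camera_offset_mm: float = 0.0

    @property
    def n_sensors(self) -> int:
        return len(self.sensor_offsets_mm)

    def interval_length(self, i: int, j: int) -> float:
        """Length of the catheter portion between sensor i and sensor j."""
        return abs(self.sensor_offsets_mm[i] - self.sensor_offsets_mm[j])

# Example with six sensors, as in Fig. 2 (illustrative values only).
config = CatheterConfig(sensor_offsets_mm=[5.0, 25.0, 45.0, 65.0, 85.0, 105.0])
print(config.n_sensors, config.interval_length(0, 1))
```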
The execution body described above may communicate with the sensor and the image acquisition unit, and the communication may be performed by wired or wireless means.
In an embodiment of the present invention, please refer to fig. 3, a navigation processing method includes:
s21: after the catheter enters a physiological channel, acquiring actual detection information of the N sensors and a channel real image acquired by the image acquisition part;
s22: extracting a plurality of simulated slice images from the model of the physiological channel according to the detection information;
s23: and determining the position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images.
The channel real image in step S21 may be understood as any image acquired by the image acquisition unit, which may be acquired in real time, or may be acquired at certain intervals, or may be acquired during the whole process of entering the physiological channel, or may be acquired at certain time, or may be acquired in any manner, without departing from the scope of the embodiment of the present invention.
The model of the physiological channel in step S22 may be, for example, the model 301 shown in fig. 4 (for ease of illustration, fig. 4 shows a simplified model, which may differ from an actual model). It may be any model describing the physiological channel.
The physiological channel may be any physiological channel of any human or animal body, for example, a bronchial tree (which may be understood by referring to the model shown in fig. 4), and in other examples, the physiological channel may be a channel of the urinary system, a channel of the digestive system, or the like. The physiologic tunnel can have multiple intersections (or can be understood as bifurcation) therein.
Based on bifurcations or other references, the physiological channel may be further divided into stages and segments; for example, the bronchial tree contains the left and right main bronchi and the lobar bronchi, which are further divided into upper, middle and lower lobes, etc., with about 24 levels of branching from the human bronchus (level 1) to the alveoli. How a particular physiological channel is divided can be understood with reference to common general knowledge in the art.
The simulated slice image in step S22 is: an image observed when an internal channel of the model is observed at a corresponding position of the model; specifically, the viewing angle of the simulated slice image may be matched with the viewing angle of the channel real image acquired by the image acquisition unit, for example, the viewing angle along the axis direction of the physiological channel may be used.
In addition, in the actual implementation, the simulation slice image of each position may be formed after the three-dimensional model is formed and rendered, or the desired simulation slice image may be formed when step S22 is performed.
In this scheme, the model of the physiological channel provides a reliable basis for positioning that reflects the shape of the physiological channel, and on this basis the channel real image and the simulated slice images provide a reliable, accurate and sufficient basis for locating the catheter. Meanwhile, because the simulated slice images are extracted based on the detection information of the sensors, it is not necessary to use all simulated slice images of the model; registration against only local simulated slice images (i.e. the plurality of simulated slice images) effectively reduces the amount of data to be processed during positioning, simplifies the processing flow and improves processing efficiency.
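For illustration, a minimal sketch of how steps S21 to S23 fit together is given below. The wrapper objects and helper names (read(), grab_frame(), extract_candidate_slices(), similarity) are assumptions made for the example and are not defined in the patent.

```python
def navigate_once(sensors, camera, model, similarity):
    """One navigation cycle corresponding to steps S21-S23 (illustrative only).

    sensors, camera, model -- assumed wrapper objects for the N sensors, the image
    acquisition part and the model of the physiological channel.
    similarity -- callable(real_image, simulated_image) -> float, e.g. cosine similarity
                  (a concrete choice is sketched later in this description).
    """
    # S21: actual detection information of the N sensors + channel real image.
    detections = [s.read() for s in sensors]        # position (and posture) per sensor
    real_image = camera.grab_frame()

    # S22: extract a plurality of simulated slice images from the model, using the
    # detection information to restrict extraction to a local target channel range.
    candidates = model.extract_candidate_slices(detections)   # (pose, image) pairs

    # S23: determine the catheter position from the channel real image and the
    # plurality of simulated slice images.
    best_pose, _ = max(candidates, key=lambda c: similarity(real_image, c[1]))
    return best_pose                                # position of the catheter in the model
```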
In one embodiment, step S22 may include:
s221: determining a target channel range in the model of the physiological channel according to the detection information;
s222: and extracting the plurality of simulated slice images according to the target channel range.
The target channel range is matched with the channel range where the image acquisition part is actually positioned; the detection information reflects the position of the corresponding part of the catheter, and the image acquisition part is arranged on the catheter, so that the channel range where the image acquisition part is actually positioned can be directly or indirectly represented based on the detection information.
Matching may mean being identical, similar, at least partially coincident, and so on. The channel ranges may be divided on any principle; taking the bronchial tree (and its model) as an example, a channel range may refer to a certain stage of the bronchial tree, to a section of a certain stage, to a section spanning several stages, or to a range determined by a user-defined division. As long as a target channel range can be defined in the model and matched with the channel range in which the image acquisition part is actually located, the scope of the embodiment of the invention is not departed from.
Taking fig. 7 as an example, a range of target channels 302 may be determined in the model 301.
In one version of step S222, a simulated slice image (i.e., the plurality of simulated slice images) may be extracted at each location within the target channel range. In some embodiments, each position in the target channel range may be screened (e.g., based on the quality, similarity, distance between corresponding positions, etc. of each analog slice image), so that an analog slice image at a portion of the positions may be selected for extraction. In some embodiments, a larger channel range including the target channel range may be extended based on the target channel range, and then the simulated slice image may be extracted for each position within the larger channel range.
Further, referring to fig. 6, step S221 may include:
s2211: projecting the positions characterized by the detection information of the L sensors into a coordinate system of the model, and determining the positions of the L sensors in the coordinate system;
s2212: and determining the target channel range according to the positions of the L sensors in the coordinate system and the relative intervals of the L sensors and the image acquisition part.
Because each sensor and the image acquisition part are fixedly arranged on the catheter, the interval between any sensor and the image acquisition part can be used to determine the position of the image acquisition part; L may therefore be equal to 1 or greater than 1, namely 1 ≤ L ≤ N, where L is an integer.
The position represented by the detection information can be understood as the position of the sensor under a certain coordinate system, and in the above process, the position of each sensor can be projected into the coordinate system of the model based on a known or calibrated mapping relation, and then the position of each sensor is positioned in the coordinate system of the model.
In the above alternative, the target channel range is determined based on the positions of the sensors, and the relative interval between each sensor and the image acquisition part is fixed, so it can be ensured that the target channel range accurately covers the actual position of the image acquisition part.
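As an illustration of steps S2211 and S2212, the following sketch projects sensor positions into the model coordinate system with a pre-calibrated rigid transform and estimates which centerline segment of the model contains the image acquisition part. The 4x4 transform, the centerline representation, the choice L = 2 and the assumption that the camera lies beyond the tip sensor are all made for the example only.

```python
import numpy as np

def to_model_frame(p_sensor: np.ndarray, T_sensor_to_model: np.ndarray) -> np.ndarray:
    """Project a sensor position (x, y, z) into the model coordinate system
    using a pre-calibrated 4x4 homogeneous transform (assumed to be known)."""
    p_h = np.append(p_sensor, 1.0)
    return (T_sensor_to_model @ p_h)[:3]

def estimate_target_segment(tip_sensor_pos, next_sensor_pos, camera_offset,
                            centerline_points, centerline_labels, T):
    """Estimate the channel segment (target channel range) containing the camera.

    tip_sensor_pos / next_sensor_pos -- positions of the two sensors nearest the
        image acquisition part (L = 2 here), in sensor coordinates.
    camera_offset -- known catheter length between the tip sensor and the camera.
    centerline_points / centerline_labels -- sampled model centerline and the
        segment label (e.g. 'LMB', 'RB1') of each sample (assumed representation).
    """
    p_tip = to_model_frame(np.asarray(tip_sensor_pos, float), T)
    p_prev = to_model_frame(np.asarray(next_sensor_pos, float), T)
    direction = (p_tip - p_prev) / np.linalg.norm(p_tip - p_prev)
    p_camera = p_tip + camera_offset * direction      # camera assumed beyond the tip sensor
    nearest = int(np.argmin(np.linalg.norm(centerline_points - p_camera, axis=1)))
    return centerline_labels[nearest], p_camera
```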
In one embodiment, the L sensors include, among the N sensors, a sensor adjacent to the image acquisition part. For example, if the image acquisition part is arranged at one end of the catheter, L may be 2 and the head and tail sensors may be selected as the L sensors.
In this alternative, using a sensor adjacent to the image acquisition part improves the accuracy of the target channel range, i.e. the range covers the actual position of the image acquisition part more accurately.
A specific example of step S221 and step S222 is as follows.
Taking the bronchial tree as an example, the purpose of the target channel range is to confirm at which level and segment of the virtual bronchial tree (i.e. the model) the current bronchoscope is, rather than its most accurate location. For example, after step S21, or after the correction performed in step S26, it is possible to confirm which level of the virtual bronchial tree (LMB, RMB, RB1-12, LB1-12, etc.) the bronchoscope is in, taking the 6-DOF data of the first sensor as the starting point and the 6-DOF data of the last sensor as the ending point.
Then, after the level of the bronchoscope in the virtual bronchial tree is determined, local discrete sampling is performed on the virtual bronchial tree (i.e. the model), taking different x, y, z positions and different α, β, γ angles into consideration to obtain a 2D image dataset; each 2D image is a slice of a certain level of the bronchial tree, i.e. one of the simulated slice images extracted in step S222.
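A sketch of the local discrete sampling described above is shown below; render_virtual_view stands in for whatever virtual-endoscopy renderer is used (the patent does not specify one), and the sampling grid parameters are illustrative.

```python
import itertools
import numpy as np

def sample_simulated_slices(render_virtual_view, center_pose,
                            pos_step=2.0, ang_step=0.1, n_pos=3, n_ang=3):
    """Discretely sample poses around a center pose inside the target channel range
    and render one simulated slice image per pose.

    center_pose -- (x, y, z, alpha, beta, gamma) near the estimated camera position.
    render_virtual_view -- callable(pose) -> 2D image (stub for the virtual renderer).
    """
    def offsets(step, n):
        return np.linspace(-step * (n // 2), step * (n // 2), n)

    dataset = []
    for dx, dy, dz in itertools.product(offsets(pos_step, n_pos), repeat=3):
        for da, db, dg in itertools.product(offsets(ang_step, n_ang), repeat=3):
            pose = np.asarray(center_pose, float) + np.array([dx, dy, dz, da, db, dg])
            dataset.append((pose, render_virtual_view(pose)))
    return dataset  # list of (pose, simulated slice image) pairs
```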
In one embodiment, referring to fig. 8, step S23 may include:
s231: determining a target image in the plurality of simulated slice images according to the similarity between the channel real image and the plurality of simulated slice images;
s232: determining the position of the catheter in the physiological channel according to the position of the target image in the model.
The similarity calculation process may be implemented by using any existing or improved algorithm, and in one example, a COS distance, a euclidean distance, and the like between images may be calculated as information representing the similarity.
In one scheme, one or more images with highest similarity can be selected as the target image, and in other schemes, a plurality of images with highest similarity can be screened (for example, screening is performed by referring to other factors except the similarity), so that a target image is finally obtained.
In a specific example, the COS distance can be used as the similarity measure for local registration between the channel real image and the simulated slice images. The simulated slice image closest to the real image is obtained, the accurate position of the image acquisition part is calculated, and the 6-degree-of-freedom data (i.e. the position) of the catheter or the like (e.g. a bronchoscope) is obtained and updated as the starting point for real-time navigation toward the target point as the end point. Because this is based on local registration, it is not necessary to search globally through all 2D slices (i.e. all simulated slice images) of the virtual bronchial tree model; the complexity is low and the real-time requirement is met.
In the above alternative scheme, based on the similarity between the images, accurate determination of the target image can be realized, thereby ensuring the accuracy of catheter positioning.
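A minimal sketch of this similarity-based selection is shown below, assuming COS (cosine) similarity computed on flattened images of equal size; the preprocessing is an assumption, not a detail given in the patent.

```python
import numpy as np

def cosine_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Cosine similarity between two images of the same shape, treated as vectors."""
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def locally_register(real_image, candidates):
    """Pick the simulated slice closest to the channel real image.

    candidates -- iterable of (pose, simulated_image) pairs, e.g. the output of the
    sampling sketch above.  Returns the pose of the best-matching slice in the model.
    """
    best_pose, _ = max(candidates, key=lambda c: cosine_similarity(real_image, c[1]))
    return best_pose
```

As the description notes, other measures such as Euclidean distance could be substituted for the cosine similarity without changing the overall flow.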
In one embodiment, referring to fig. 9, before step S21, the method further includes:
s24: before the catheter enters a physiological channel, establishing the model according to scanning data of the physiological channel;
s25: and determining a navigation path according to the model and the marked target point, and using the navigation path as a movement basis after the catheter enters a physiological channel.
The scan data may be, for example, CT scan data, and may be combined with other calibration or measurement data when establishing a model, so long as a model corresponding to the physiological channel can be formed, without departing from the scope of the embodiments of the present invention. Wherein the navigation path may be understood with reference to navigation path 303 in fig. 10.
Step S25 may be performed automatically or in combination with manual operation; for example, the target point may be marked in the model by a doctor (or another person).
In an example, a CT imaging technology and a 3D rendering technology may be used to determine a model, specifically, 3D CT data (i.e. scan data of a physiological channel) may be obtained by CT scanning, then a virtual bronchial tree (i.e. model) may be obtained by the 3D rendering technology, and then, after determining the position of the target point in the virtual bronchial tree, a corresponding path plan may be given to obtain a navigation path.
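For the path-planning part of step S25, one simple possibility (not prescribed by the patent) is to treat the branches of the virtual bronchial tree as a graph and search a path from the entrance to the branch containing the marked target point, for example with breadth-first search:

```python
from collections import deque

def plan_navigation_path(branches, entrance, target):
    """Shortest branch-to-branch path from the entrance to the target branch.

    branches -- adjacency dict of the airway tree, e.g. {'trachea': ['LMB', 'RMB'], ...}
    Returns the list of branch names to traverse, or None if the target is unreachable.
    """
    queue = deque([[entrance]])
    visited = {entrance}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in branches.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy, highly simplified example of an assumed bronchial tree.
tree = {'trachea': ['LMB', 'RMB'], 'RMB': ['RB1', 'RB2'], 'LMB': ['LB1']}
print(plan_navigation_path(tree, 'trachea', 'RB2'))   # ['trachea', 'RMB', 'RB2']
```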
In some embodiments, after step S23, guidance information may be fed back based on the navigation path and the position determined in step S23, so that the staff can be informed in real time of the position of the catheter (e.g. a bronchoscope) and of the end point, and automatic navigation path guidance can be given, including but not limited to automatically locating and displaying arrows, distance parameters and the like. The technique places lower requirements on the skills of the staff, and the positioning process is quick and accurate.
In the above alternative scheme, because the model is established according to the scanning data, the model can be effectively ensured to accurately reflect the form of the physiological channel, and meanwhile, the navigation path determined based on the model can accurately guide the catheter to travel in the physiological channel.
In one embodiment, the detection information of the sensor may be disturbed due to the influence of the physiological activity (e.g. respiration), so that the detection result is difficult to match with the model, and thus the detection information needs to be corrected before step S22.
The need for correction of the detected information will be explained below with reference to fig. 11, taking the effect of respiratory disturbance on the bronchial tree as an example.
Because of respiratory disturbance, the lungs contract during exhalation and expand during inhalation, so their shape differs greatly from the virtual bronchial tree (i.e. the model) rendered from the pre-operative CT. This can also be understood as the coordinate system of the sensors and the coordinate system of the virtual bronchial tree not being aligned: although the position coordinates of a sensor are accurate, there is a large deviation in its relative position (i.e. at which position of the bronchial tree it is). As shown in fig. 11, the bronchial tree contracts inward from the blue solid line to the blue dotted line, and the catheter with the magnetic sensors likewise changes from the solid line position to the dotted line position in the figure. Directly calculating the relative position against the pre-operative virtual bronchial tree would therefore lead to erroneous decisions; the respiratory disturbance changes the relative position.
Considering the respiratory model of the lung, the respiratory deformation of the lower lobe is greater than that of the middle and upper lobes, so the sensors should be distributed across different lobes as far as possible (e.g. during navigation some in the lower lobe and some in the middle lobe), and the number of sensors should be greater than 2. Since the position distribution of the sensors is known, the geometry is unchanged and the catheter length is unchanged, L1 + L0 = L2 + L0. That is, the distance between every two sensors is fixed, and the 6-DOF information (i.e. detection information) of the preceding sensor can be used to predict, through geometric calculation, the 6-DOF information (i.e. detection information) of the following sensor, thereby achieving the correction.
As the patient breathes, the catheter 101 and the sensors 102 change from the position corresponding to L2 to the position corresponding to L1 in the figure. However, while the catheter length is unchanged (L1 = L2), the sensor degrees of freedom can sense the change of the catheter, so the coordinates of the latter sensor can be corrected by the coordinates of the former sensor.
On this basis, referring to fig. 9, before step S22 the navigation processing method may further include:
S26: and correcting at least part of the actual detection information of the sensors according to the actual detection information of the N sensors and the interval length information among the sensors to obtain corrected detection information.
Further, through step S26, the plurality of simulated slice images to be extracted in step S22 are extracted according to the corrected detection information.
The interval length information characterizes the length of the catheter portions between the sensors. It may include the length of the catheter portion between adjacent sensors, and may also include the length of the catheter portion between non-adjacent sensors.
In the scheme, the correction of the detection information is realized, and the correction result can be constrained by the distribution position of the sensors due to the combination of the interval length information between the sensors in the correction process, so that the accuracy of the detection information after correction is improved.
Further, step S26 may specifically include:
S260: for any kth sensor, correcting the actual detection information of the kth sensor according to the detection information of one or more sensors located between the kth sensor and the physiological channel entrance and the interval length information between those sensors and the kth sensor, wherein k is greater than or equal to 2, the kth sensor refers to the kth of the N sensors arranged along a target order, and the target order is opposite to the order in which the sensors sequentially enter the physiological channel, i.e. it runs in the direction away from the physiological channel entrance.
Further, k may take different values one by one (e.g. consecutive values 2, 3, 4, ..., or non-consecutive values), so that step S260 is performed one by one for the sensors in the at least part of the sensors.
Still further, referring to fig. 12, step S260 may specifically include:
s261: predicting at least part of detection information of the kth sensor according to the actual detection information or the corrected detection information of the mth sensor and the interval length information between the kth sensor and the mth sensor to obtain the prediction detection information of the kth sensor; wherein m is less than k;
s262: and correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor to obtain corrected detection information of the kth sensor.
In the above embodiment, a sensor is corrected based on the detection information of the sensors located in front of it. Taking the bronchial tree as an example of the physiological channel, the closer a front sensor is to the channel entrance (i.e. the closer it is to the upper lobe of the lung rather than deep in the body), the less it is disturbed by physiological activity such as respiration. Using the front sensors to correct and compensate the rear sensors can therefore eliminate or reduce the influence of the interference on the detection result and improve the accuracy of the detection information.
In other words, considering pulmonary respiration, the detection information (e.g., coordinates, angles) of a front sensor is less affected than that of a rear sensor, and by correcting the detection information (e.g., correcting coordinates and angles), each sensor can be made to give accurate detection information.
In a specific example, m = k-1, and the detection information of the at least part of the sensors is corrected sequentially along the target order. That is, the detection information of the sensors may be corrected one by one from front to back, with the detection information of the earlier of two adjacent sensors in the target order used to correct the later one. This ensures that each correction is performed on the basis of detection information that is already relatively accurate.
In other examples, m may not be equal to k-1, and the sensor used to correct the detection information of the kth sensor is not limited to a single sensor.
Here, 'former sensor' and 'front sensor' refer to a sensor earlier in the target order; 'latter sensor' and 'rear sensor' refer to a sensor later in the target order.
In one embodiment, the predicted detection information includes position information of a predicted position of the kth sensor, and the distance between the predicted position and the position characterized by the detection information of the mth sensor matches the interval length information between the kth sensor and the mth sensor. Matching here may mean that the distance is the same as, or close to, the interval length (e.g., differs from it by less than a certain distance threshold). The scheme thus constrains the predicted position with the position of the mth sensor and the interval length information, ensuring that the prediction accurately matches both.
In addition to distance, the pose of the mth sensor may also put constraints on the predicted position.
Thus, referring to fig. 13, step S261 may include:
S2611: determine a corresponding extension line according to the actual detection information or the corrected detection information of the mth sensor;
S2612: determine the predicted position according to the extension line and the interval length information between the kth sensor and the mth sensor.
The position of the extension line matches the position characterized by the corresponding detection information; for example, the extension line may pass through the position given in the detection information of the mth sensor (i.e., through its x, y and z coordinates), and the extension direction of the extension line matches the posture characterized by the corresponding detection information (e.g., the extension direction matches α, β and γ in the detection information).
Since the posture of a sensor is in fact the posture of the catheter section where it is located, and changes as the catheter bends, the extension direction may specifically match the tangential direction of the catheter at the sensor and point toward the next sensor along the target order. For example, the extension direction may be the same as, close to, or at a specified angle from the tangential direction (with the angular difference below a certain threshold). This scheme fully accounts for the constraint that the posture of the mth sensor places on the position of the kth sensor, so that the prediction accurately matches the posture of the mth sensor (i.e., matches the bending of the catheter at that position).
Furthermore, both the position and the posture of the mth sensor are fully taken into account when predicting the position of the kth sensor, which makes the correction result more accurate.
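To make steps S2611 and S2612 concrete, the following is a minimal Python sketch (an illustration, not the patented implementation): it builds the extension line from the mth sensor's pose and steps along it by the interval length to obtain the predicted position. The mapping of the rotation angles (α, β, γ about the x, y and z axes) to a unit tangent vector via a Z·Y·X rotation of the local +z axis is an assumed convention for illustration only.

```python
import numpy as np

def tangent_from_angles(alpha: float, beta: float, gamma: float) -> np.ndarray:
    """Unit tangent of the catheter at a sensor, assuming the pose angles
    (rotations about x, y, z, in degrees) rotate the catheter's local +z axis.
    This angle convention is an illustrative assumption, not fixed by the text."""
    a, b, g = np.radians([alpha, beta, gamma])
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(g), -np.sin(g), 0], [np.sin(g), np.cos(g), 0], [0, 0, 1]])
    return (Rz @ Ry @ Rx) @ np.array([0.0, 0.0, 1.0])

def predict_position(pose_m: np.ndarray, interval_mk: float) -> np.ndarray:
    """Steps S2611/S2612: extend from the mth sensor's (possibly corrected) pose
    (x, y, z, alpha, beta, gamma) along the catheter tangent by the interval
    length to predict the kth sensor's position (x_p, y_p, z_p)."""
    x, y, z, alpha, beta, gamma = pose_m
    direction = tangent_from_angles(alpha, beta, gamma)  # extension line direction
    return np.array([x, y, z]) + interval_mk * direction
```

In practice the tangent would be computed in whatever angle convention the localization system actually reports.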
In one embodiment, the predicted detection information further includes posture information of a predicted posture of the kth sensor, the predicted posture matching the posture of the mth sensor. The posture prediction of the kth sensor is thus mainly constrained by the posture of the mth sensor.
Furthermore, in this scheme the posture of the mth sensor is fully taken into account when predicting the posture of the kth sensor, which improves the correction accuracy.
In one embodiment, step S262 may include:
correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor and the set correction reference information;
wherein the correction reference information includes: first correction reference information, which characterizes the degree to which the corrected detection information of the corresponding sensor matches the predicted detection information, and/or second correction reference information, which characterizes the degree to which the corrected detection information of the corresponding sensor matches the actual detection information.
The first correction reference information and the second correction reference information may be any information capable of characterizing a corresponding matching degree, and the content of the correction reference information may be changed arbitrarily based on different correction algorithms, without departing from the scope of the embodiment of the present invention.
In one example, the correction reference information differs for sensors of different order, and:
the closer a sensor among the N sensors is to the entrance of the physiological channel, the lower the matching degree represented by its first correction reference information and the higher the matching degree represented by its second correction reference information.
In the above scheme, the closer a sensor is to the entrance of the physiological channel, the smaller the interference it suffers (for example, it is closer to the upper lobe of the lung, where respiratory interference is smaller). Accordingly, letting the correction reference information differ with the order of the sensors allows it to match the distribution of the interference more accurately, which guarantees the correction accuracy.
Further, the magnitude of the change in matching degree between adjacent sensors may be the same (e.g., the first weight changes by an equal amount from sensor to sensor along the target order, as does the second weight), or it may differ between adjacent sensors (e.g., the differences in the first and second weights of adjacent sensors may vary). The magnitude of the change may also be correlated with the interval length between the sensors; for example, the longer the interval, the larger the change in matching degree. However the quantized correction reference information changes, it does not depart from the scope of the above scheme.
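As one possible way of quantifying order-dependent correction reference information, the sketch below assigns each sensor a first weighted value that grows with its cumulative catheter length from the channel entrance, so that deeper (more disturbed) sensors lean more on the prediction. The linear scaling and the 0.5 cap are illustrative assumptions, consistent with the λ ≤ 0.5 example given further below.

```python
def prediction_weights(interval_lengths, lambda_max=0.5):
    """Assign a first weighted value (lambda_k, applied to the predicted detection
    information) to sensors k = 2..N along the target order; the second weighted
    value is 1 - lambda_k. interval_lengths[i] is the catheter length between the
    (i+1)th and (i+2)th sensors in the target order. Deeper sensors (larger
    cumulative depth from the entrance) receive a larger lambda."""
    depths, total = [], 0.0
    for d in interval_lengths:
        total += d
        depths.append(total)
    return {k + 2: lambda_max * depth / depths[-1] for k, depth in enumerate(depths)}

# Example: seven sensors with equal 20 mm spacing -> lambda grows from ~0.083 to 0.5.
print(prediction_weights([20.0] * 6))
```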
In a further aspect, the correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor and the set correction reference information specifically includes:
performing, according to the correction reference information, a weighted summation of the predicted detection information of the kth sensor and the actual detection information of the kth sensor, to obtain the corrected detection information of the kth sensor.
The first correction reference information is a first weighted value corresponding to the prediction detection information, and the second correction reference information is a second weighted value corresponding to the actual detection information.
This scheme provides a quantifiable means of correcting the detection information: the weighted summation lets the weighted values balance the predicted and actual detection information effectively while keeping the algorithm relatively simple.
In one example, the sum of the first weighted value and the second weighted value is 1, and the value of the first weighted value is less than or equal to 0.5.
In addition, in some examples, if other factors are also considered in the correction, the weighting values may further include other weighting values corresponding to the other factors.
For example, for the kth sensor, in addition to the detection information of the mth sensor, the correction may also draw on the detection information of a qth sensor (q smaller than k and not equal to m), on past detection information of the kth sensor itself (e.g., its detection information at a previous moment), or on the detection information of a pth sensor (p larger than k); in such cases the sum of the first weighted value and the second weighted value may be smaller than 1.
The six degrees of freedom data of the sensor in the three-dimensional space are: coordinates in the x-axis direction, coordinates in the y-axis direction, coordinates in the z-axis direction, rotation angle about the x-axis, rotation angle about the y-axis, rotation angle about the z-axis. The data of the six degrees of freedom can be understood as the detection information.
Taking seven sensors as an example, the x-axis coordinates of the seven sensors are x_1, x_2, x_3, x_4, x_5, x_6, x_7; the y-axis coordinates are y_1 to y_7; the z-axis coordinates are z_1 to z_7; and the three rotation angles are α_1 to α_7, β_1 to β_7, and γ_1 to γ_7, respectively.
Further, the actual detection information of the kth sensor may be corrected based on the following formula:

(x_k', y_k', z_k', α_k', β_k', γ_k') = (1 − λ)(x_k, y_k, z_k, α_k, β_k, γ_k) + λ(x_p, y_p, z_p, α_p, β_p, γ_p)

wherein:

(x_k', y_k', z_k', α_k', β_k', γ_k') characterizes the corrected detection information of the kth sensor;
x_k' is the coordinate in the x-axis direction in the corrected detection information of the kth sensor;
y_k' is the coordinate in the y-axis direction in the corrected detection information of the kth sensor;
z_k' is the coordinate in the z-axis direction in the corrected detection information of the kth sensor;
α_k' is the rotation angle around the x-axis in the corrected detection information of the kth sensor;
β_k' is the rotation angle around the y-axis in the corrected detection information of the kth sensor;
γ_k' is the rotation angle around the z-axis in the corrected detection information of the kth sensor;
(x_k, y_k, z_k, α_k, β_k, γ_k) characterizes the actual detection information of the kth sensor;
x_k is the coordinate in the x-axis direction in the actual detection information of the kth sensor;
y_k is the coordinate in the y-axis direction in the actual detection information of the kth sensor;
z_k is the coordinate in the z-axis direction in the actual detection information of the kth sensor;
α_k is the rotation angle around the x-axis in the actual detection information of the kth sensor;
β_k is the rotation angle around the y-axis in the actual detection information of the kth sensor;
γ_k is the rotation angle around the z-axis in the actual detection information of the kth sensor;
(x_p, y_p, z_p, α_p, β_p, γ_p) characterizes the predicted detection information of the kth sensor;
x_p is the coordinate in the x-axis direction in the predicted detection information of the kth sensor;
y_p is the coordinate in the y-axis direction in the predicted detection information of the kth sensor;
z_p is the coordinate in the z-axis direction in the predicted detection information of the kth sensor;
α_p is the rotation angle around the x-axis in the predicted detection information of the kth sensor;
β_p is the rotation angle around the y-axis in the predicted detection information of the kth sensor;
γ_p is the rotation angle around the z-axis in the predicted detection information of the kth sensor;
λ is the first weighted value; and
1 − λ is the second weighted value.
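The correction formula above translates directly into code; the sketch below simply restates it for one sensor, treating the six degrees of freedom as a vector (the numeric values in the example are made up for illustration).

```python
import numpy as np

def correct_detection(actual_k: np.ndarray, predicted_k: np.ndarray, lam: float) -> np.ndarray:
    """Weighted-sum correction of the kth sensor's actual detection information
    (x, y, z, alpha, beta, gamma) with its predicted detection information.
    lam is the first weighted value (for the prediction); 1 - lam is the second."""
    return (1.0 - lam) * actual_k + lam * predicted_k

# Example with illustrative numbers: a deep sensor (lam = 0.4) displaced by breathing.
actual = np.array([12.0, -3.5, 85.0, 10.0, 5.0, 0.0])
predicted = np.array([11.2, -3.1, 83.8, 9.0, 4.5, 0.0])
print(correct_detection(actual, predicted, lam=0.4))
```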
It can be seen that, considering the respiratory model of the lung, the respiratory deformation of the lower lobe is greater than that of the middle and upper lobes, so the noise ε increases from top to bottom. Based on this, the above scheme corrects the sensors in turn: the coordinates and angles (i.e., detection information) of a preceding sensor (closer to the upper lobe and less disturbed by respiration) are used to apply respiratory compensation to the detection information of the following sensor, yielding more accurate coordinates and angles. The coordinates and angles are corrected by computing distances (i.e., interval length information) and assigning weights (embodied by the first weighted value λ and the second weighted value 1 − λ).
In addition, since the six-degree-of-freedom information of the (k−1)th sensor and the interval length information between it and the kth sensor are known, under these constraints the detection information of the kth sensor can be predicted with any existing or improved prediction algorithm in the field to obtain the corresponding predicted detection information; some schemes may also combine other information in the prediction. The predicted detection information may be all of the detection information of the kth sensor (e.g., all six degrees of freedom) or only part of it (e.g., only the x-axis, y-axis and z-axis coordinates). Whatever detection information is predicted, and however the prediction is performed, it does not depart from the scope of the above embodiments.
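Tying the pieces together, a sequential correction along the target order (with m = k−1) could look like the sketch below; the prediction and blending steps are passed in as callables so that any prediction algorithm of the kind mentioned above can be plugged in. This overall structure is an assumption for illustration, not the patent's reference implementation.

```python
import numpy as np

def sequentially_correct(poses, intervals, lambdas, predict, blend):
    """Correct sensors 2..N along the target order, using m = k-1 at each step.
    poses: list of 6-DOF arrays, poses[0] being the sensor nearest the entrance;
    intervals[i]: catheter length between the (i+1)th and (i+2)th sensors;
    lambdas[i]: first weighted value for the (i+2)th sensor;
    predict(pose_m, interval) -> predicted 6-DOF of the next sensor (e.g. position
    from the extension line, angles copied from the mth sensor);
    blend(actual, predicted, lam) -> corrected 6-DOF (e.g. the weighted sum above)."""
    corrected = [np.asarray(poses[0], dtype=float)]  # the first sensor is kept as-is
    for k in range(1, len(poses)):
        predicted = predict(corrected[k - 1], intervals[k - 1])
        corrected.append(blend(np.asarray(poses[k], dtype=float), predicted, lambdas[k - 1]))
    return corrected
```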
In the specific scheme of steps S21 to S26, a CT scan of the patient can be performed before surgery to generate the virtual bronchial tree (i.e., the model). Considering that respiratory interference makes the positioning of a single sensor inaccurate, a plurality of sensors are used to obtain a coarse-grained local position, locking the bronchoscope (i.e., the image acquisition part) into a certain region, at a certain generation (level), of the virtual bronchial tree; local registration is then performed between the channel real image and the 2D slice images (i.e., simulated slice images) within that region to obtain a fine-grained position, thereby aligning the sensor coordinate system with the model coordinate system and achieving accurate positioning.
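As an illustration of the fine-grained registration step (not the patented algorithm itself), the sketch below picks, among the simulated slice images extracted within the coarse region indicated by the sensors, the one most similar to the channel real image; normalized cross-correlation is used here purely as an example similarity measure.

```python
import numpy as np

def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two grayscale images of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def fine_localize(real_image: np.ndarray, candidate_slices: dict) -> tuple:
    """candidate_slices maps a model position (e.g. a point on the virtual
    bronchial tree inside the coarse region) to its simulated 2D slice image.
    Returns the position whose simulated slice best matches the real image."""
    best_pos, best_score = None, -np.inf
    for pos, slice_img in candidate_slices.items():
        score = normalized_cross_correlation(real_image, slice_img)
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos, best_score
```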
Referring to fig. 14, a navigation processing device 400 in a physiological channel includes:
an acquisition module 401, configured to acquire, after the catheter enters the physiological channel, the actual detection information of the N sensors and the channel real image acquired by the image acquisition part, wherein the detection information characterizes the position of the catheter section where the corresponding sensor is located;
a slice extraction module 402, configured to extract a plurality of simulated slice images from the model of the physiological channel according to the detection information; the simulated slice images are: an image observed when an internal channel of the model is observed at a corresponding position of the model;
A positioning module 403, configured to determine a position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images.
Referring to fig. 15, the navigation processing device 400 in the physiological channel further includes:
the correction module 406 is configured to correct at least part of the actual detection information of the sensors according to the actual detection information of the N sensors and the interval length information between the sensors, so as to obtain corrected detection information, so that: the plurality of simulated slice images are extracted from the modified detection information, the interval length information characterizing the length of a portion of the catheter between the sensors in the catheter.
Referring to fig. 15, the navigation processing device 400 in the physiological channel further includes:
a modeling module 404, configured to build the model according to scan data of the physiological channel before the catheter enters the physiological channel;
the path determining module 405 is configured to determine a navigation path according to the model and the marked target point, so as to use the navigation path as a movement basis after the catheter enters the physiological channel.
Referring to fig. 16, there is provided an electronic device 50 including:
A processor 51; the method comprises the steps of,
a memory 52 for storing executable instructions of the processor;
wherein the processor 51 is configured to perform the above-mentioned method via execution of the executable instructions.
The processor 51 is capable of communicating with the memory 52 via the bus 53.
The embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the methods referred to above.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the method embodiments described above may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer readable storage medium. The program, when executed, performs steps including the method embodiments described above; and the aforementioned storage medium includes: various media that can store program code, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (18)

1. The navigation processing device in the physiological channel is characterized by adopting a catheter, wherein the catheter is provided with N sensors and an image acquisition part, the N sensors are sequentially distributed at different positions in the length direction of the catheter, and N is more than or equal to 2;
a navigation processing device comprising:
the acquisition module is used for acquiring the actual detection information of the N sensors and the channel real image acquired by the image acquisition part after the catheter enters the physiological channel, wherein the detection information represents the position of the catheter position where the corresponding sensor is positioned;
the correction module is used for predicting, for any kth sensor among the N sensors, at least part of the detection information of the kth sensor according to the actual detection information or corrected detection information of the mth sensor and the interval length information between the kth sensor and the mth sensor, to obtain the prediction detection information of the kth sensor; wherein m is less than k, and k is greater than or equal to 2; the kth sensor refers to the kth sensor of the N sensors distributed in sequence along a target order, the target order being opposite to the order in which the sensors sequentially enter the physiological channel; and for correcting the actual detection information of the kth sensor according to the prediction detection information of the kth sensor and the set correction reference information, to obtain the corrected detection information of the kth sensor, so that: a plurality of simulated slice images are extracted based on the corrected detection information, the interval length information characterizing the length of the catheter portion between the sensors in the catheter; wherein the correction reference information includes: first correction reference information characterizing the matching degree between the corrected detection information of the corresponding sensor and the prediction detection information, and/or second correction reference information characterizing the matching degree between the corrected detection information of the corresponding sensor and the actual detection information;
The slice extraction module is used for extracting a plurality of simulated slice images from the model of the physiological channel according to the detection information; the simulated slice images are: an image observed when an internal channel of the model is observed at a corresponding position of the model;
and the positioning module is used for determining the position of the catheter in the physiological channel according to the channel real image and the plurality of simulated slice images.
2. The navigation processing device of claim 1, wherein,
extracting a plurality of simulated slice images from the model of the physiological channel according to the detection information, specifically comprising:
determining a target channel range in the model of the physiological channel according to the detection information; the target channel range is matched with the channel range in which the image acquisition part is actually positioned;
and extracting the plurality of simulated slice images according to the target channel range.
3. The navigation processing device of claim 2, wherein,
according to the detection information, determining a target channel range in the model of the physiological channel specifically comprises the following steps:
projecting the positions characterized by the detection information of the L sensors into a coordinate system of the model, and determining the positions of the L sensors in the coordinate system; wherein L is greater than or equal to 1 and less than or equal to N;
And determining the target channel range according to the positions of the L sensors in the coordinate system and the relative intervals of the L sensors and the image acquisition part.
4. A navigation processing device according to claim 3, wherein the L sensors include a sensor of the N sensors adjacent to the image capturing section.
5. The navigation processing device of any of claims 1 to 4, wherein determining the position of the catheter in the physiological channel from the channel real image and the plurality of simulated slice images, in particular comprises:
determining a target image in the plurality of simulated slice images according to the similarity between the channel real image and the plurality of simulated slice images;
determining the position of the catheter in the physiological channel according to the position of the target image in the model.
6. The navigation processing device of claim 1, wherein m = k-1, the detection information of the at least some sensors is sequentially modified along the target order.
7. The navigation processing device according to claim 1, wherein the predicted detection information includes position information of a predicted position of the kth sensor, a distance between the predicted position and a position characterized by the detection information of the mth sensor matching interval length information between the kth sensor and the mth sensor.
8. The navigation processing device of claim 7, wherein the detection information further characterizes a pose of a catheter site where the corresponding sensor is located;
correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor to obtain corrected detection information of the kth sensor, wherein the method comprises the following steps:
determining a corresponding extension line according to the actual detection information or the corrected detection information of the mth sensor, wherein the position of the extension line is matched with the position represented by the corresponding detection information, and the extension direction of the extension line is matched with the gesture represented by the corresponding detection information;
and determining the predicted position according to the extension line and the interval length information between the kth sensor and the mth sensor.
9. The navigation processing device according to claim 1, wherein the predicted detection information further includes posture information of a predicted posture of the kth sensor, the predicted posture matching a posture of the mth sensor.
10. The navigation processing device of claim 1, wherein the revised reference information for the different order sensors is different, and:
The closer the N sensors are to the entrance of the physiological channel, the lower the matching degree represented by the first correction reference information of the sensors is, and the higher the matching degree represented by the second correction reference information is.
11. The navigation processing device of claim 1, wherein,
correcting the actual detection information of the kth sensor according to the predicted detection information of the kth sensor and the set correction reference information, wherein the method specifically comprises the following steps:
according to the correction reference information, carrying out weighted summation on the prediction detection information of the kth sensor and the actual detection information of the kth sensor to obtain detection information after correction of the kth sensor; the first correction reference information is a first weighted value corresponding to the prediction detection information, and the second correction reference information is a second weighted value corresponding to the actual detection information.
12. The navigation processing device of claim 11, wherein,
performing, according to the correction reference information, the weighted summation of the prediction detection information of the kth sensor and the actual detection information of the kth sensor to obtain the corrected detection information of the kth sensor specifically comprises:

correcting the actual detection information of the kth sensor based on the following formula:

(x_k', y_k', z_k', α_k', β_k', γ_k') = (1 − λ)(x_k, y_k, z_k, α_k, β_k, γ_k) + λ(x_p, y_p, z_p, α_p, β_p, γ_p)

wherein:

(x_k', y_k', z_k', α_k', β_k', γ_k') characterizes the corrected detection information of the kth sensor;

x_k' characterizes the coordinate in the x-axis direction in the corrected detection information of the kth sensor;

y_k' characterizes the coordinate in the y-axis direction in the corrected detection information of the kth sensor;

z_k' characterizes the coordinate in the z-axis direction in the corrected detection information of the kth sensor;

α_k' characterizes the rotation angle around the x-axis in the corrected detection information of the kth sensor;

β_k' characterizes the rotation angle around the y-axis in the corrected detection information of the kth sensor;

γ_k' characterizes the rotation angle around the z-axis in the corrected detection information of the kth sensor;

(x_k, y_k, z_k, α_k, β_k, γ_k) characterizes the actual detection information of the kth sensor;

x_k characterizes the coordinate in the x-axis direction in the actual detection information of the kth sensor;

y_k characterizes the coordinate in the y-axis direction in the actual detection information of the kth sensor;

z_k characterizes the coordinate in the z-axis direction in the actual detection information of the kth sensor;

α_k characterizes the rotation angle around the x-axis in the actual detection information of the kth sensor;

β_k characterizes the rotation angle around the y-axis in the actual detection information of the kth sensor;

γ_k characterizes the rotation angle around the z-axis in the actual detection information of the kth sensor;

(x_p, y_p, z_p, α_p, β_p, γ_p) characterizes the prediction detection information of the kth sensor;

x_p characterizes the coordinate in the x-axis direction in the prediction detection information of the kth sensor;

y_p characterizes the coordinate in the y-axis direction in the prediction detection information of the kth sensor;

z_p characterizes the coordinate in the z-axis direction in the prediction detection information of the kth sensor;

α_p characterizes the rotation angle around the x-axis in the prediction detection information of the kth sensor;

β_p characterizes the rotation angle around the y-axis in the prediction detection information of the kth sensor;

γ_p characterizes the rotation angle around the z-axis in the prediction detection information of the kth sensor;

λ is the first weighted value; and

1 − λ is the second weighted value.
13. The navigation processing device according to any one of claims 1 to 4, further comprising:
before the catheter enters a physiological channel, establishing the model according to scanning data of the physiological channel;
and determining a navigation path according to the model and the marked target point, and using the navigation path as a movement basis after the catheter enters a physiological channel.
14. The navigation processing device of any of claims 1-4, wherein the physiological channel is a bronchial tree.
15. An electronic device, comprising a processor and a memory,
the memory is used for storing codes;
the processor is configured to execute the code in the memory to implement the functions implemented by the navigation processing device of any of claims 1 to 14.
16. A storage medium having stored thereon a computer program which, when executed by a processor, realizes the functions realized by the navigation processing device of any one of claims 1 to 14.
17. A navigation system within a physiological channel, comprising: the device comprises a catheter, N sensors, an image acquisition part and a data processing part, wherein the N sensors and the image acquisition part are arranged on the catheter, the N sensors are sequentially distributed at different positions in the length direction of the catheter, and the data processing part can be directly or indirectly communicated with the N sensors and the image acquisition part;
the data processing section is configured to execute the functions realized by the navigation processing device according to any one of claims 1 to 14.
18. The in-channel navigation system of claim 17, wherein the sensor is a magnetic navigation sensor.
CN202110407995.3A 2020-12-31 2021-04-15 Navigation processing method, device, system, equipment and medium in physiological channel Active CN113100943B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020116378326 2020-12-31
CN202011637832 2020-12-31

Publications (2)

Publication Number Publication Date
CN113100943A CN113100943A (en) 2021-07-13
CN113100943B true CN113100943B (en) 2023-05-23

Family

ID=76717458

Family Applications (4)

Application Number Title Priority Date Filing Date
CN202110408017.0A Active CN113116475B (en) 2020-12-31 2021-04-15 Transcatheter navigation processing method, device, medium, equipment and navigation system
CN202110407995.3A Active CN113100943B (en) 2020-12-31 2021-04-15 Navigation processing method, device, system, equipment and medium in physiological channel
CN202120775579.4U Active CN215192193U (en) 2020-12-31 2021-04-15 In-vivo navigation device, in-vivo navigation system and medical treatment system
CN202110406710.4A Active CN113116524B (en) 2020-12-31 2021-04-15 Detection compensation method, device, navigation processing method, device and navigation system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110408017.0A Active CN113116475B (en) 2020-12-31 2021-04-15 Transcatheter navigation processing method, device, medium, equipment and navigation system

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202120775579.4U Active CN215192193U (en) 2020-12-31 2021-04-15 In-vivo navigation device, in-vivo navigation system and medical treatment system
CN202110406710.4A Active CN113116524B (en) 2020-12-31 2021-04-15 Detection compensation method, device, navigation processing method, device and navigation system

Country Status (1)

Country Link
CN (4) CN113116475B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114305686A (en) * 2021-12-20 2022-04-12 杭州堃博生物科技有限公司 Positioning processing method, device, equipment and medium based on magnetic sensor
CN116433874A (en) * 2021-12-31 2023-07-14 杭州堃博生物科技有限公司 Bronchoscope navigation method, device, equipment and storage medium
CN114041741B (en) * 2022-01-13 2022-04-22 杭州堃博生物科技有限公司 Data processing unit, processing device, surgical system, surgical instrument, and medium
WO2023179339A1 (en) * 2022-03-23 2023-09-28 上海微创微航机器人有限公司 Catheter shape and force sensing method, surgical navigation method, and interventional operation system

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8611983B2 (en) * 2005-01-18 2013-12-17 Philips Electronics Ltd Method and apparatus for guiding an instrument to a target in the lung
US20080249395A1 (en) * 2007-04-06 2008-10-09 Yehoshua Shachar Method and apparatus for controlling catheter positioning and orientation
US9775538B2 (en) * 2008-12-03 2017-10-03 Mediguide Ltd. System and method for determining the position of the tip of a medical catheter within the body of a patient
JP4897122B2 (en) * 2010-05-31 2012-03-14 オリンパスメディカルシステムズ株式会社 Endoscope shape detection apparatus and endoscope shape detection method
JP4897123B1 (en) * 2010-08-27 2012-03-14 オリンパスメディカルシステムズ株式会社 Endoscope shape detection apparatus and endoscope shape detection method
US8403829B2 (en) * 2010-08-27 2013-03-26 Olympus Medical Systems Corp. Endoscopic form detection device and form detecting method of insertion section of endoscope
US8900131B2 (en) * 2011-05-13 2014-12-02 Intuitive Surgical Operations, Inc. Medical system providing dynamic registration of a model of an anatomical structure for image-guided surgery
EP2755591B1 (en) * 2011-09-16 2020-11-18 Auris Health, Inc. System for displaying an image of a patient anatomy on a movable display
WO2014028394A1 (en) * 2012-08-14 2014-02-20 Intuitive Surgical Operations, Inc. Systems and methods for registration of multiple vision systems
US10082395B2 (en) * 2012-10-03 2018-09-25 St. Jude Medical, Atrial Fibrillation Division, Inc. Scaling of electrical impedance-based navigation space using inter-electrode spacing
US9002437B2 (en) * 2012-12-27 2015-04-07 General Electric Company Method and system for position orientation correction in navigation
CN104780826B (en) * 2013-03-12 2016-12-28 奥林巴斯株式会社 Endoscopic system
US10098565B2 (en) * 2013-09-06 2018-10-16 Covidien Lp System and method for lung visualization using ultrasound
JP2015181643A (en) * 2014-03-24 2015-10-22 オリンパス株式会社 Curved shape estimation system, tubular insert system, and method for estimating curved shape of curved member
BR112016025066A2 (en) * 2014-04-29 2017-08-15 Koninklijke Philips Nv device, system, and method for determining a specific position of a distal end of a catheter in an anatomical structure, computer program element for controlling a device, and computer readable media
KR102419094B1 (en) * 2014-07-28 2022-07-11 인튜어티브 서지컬 오퍼레이션즈 인코포레이티드 Systems and methods for planning multiple interventional procedures
CN104306072B (en) * 2014-11-07 2016-08-31 常州朗合医疗器械有限公司 Medical treatment navigation system and method
WO2016106114A1 (en) * 2014-12-22 2016-06-30 Intuitive Surgical Operations, Inc. Flexible electromagnetic sensor
KR102542190B1 (en) * 2015-04-06 2023-06-12 인튜어티브 서지컬 오퍼레이션즈 인코포레이티드 System and method of registration compensation in image-guided surgery
US10317196B2 (en) * 2015-06-17 2019-06-11 The Charles Stark Draper Laboratory, Inc. Navigation systems and methods using fiber optic shape sensors and localized position sensors
GB2567750B (en) * 2016-03-13 2022-02-09 Synaptive Medical Inc System and method for sensing tissue deformation
WO2018013341A1 (en) * 2016-07-15 2018-01-18 St. Jude Medical, Cardiology Division, Inc. Methods and systems for generating smoothed images of an elongate medical device
WO2018125917A1 (en) * 2016-12-28 2018-07-05 Auris Surgical Robotics, Inc. Apparatus for flexible instrument insertion
US11793579B2 (en) * 2017-02-22 2023-10-24 Covidien Lp Integration of multiple data sources for localization and navigation
WO2018183727A1 (en) * 2017-03-31 2018-10-04 Auris Health, Inc. Robotic systems for navigation of luminal networks that compensate for physiological noise
US10575907B2 (en) * 2017-06-21 2020-03-03 Biosense Webster (Israel) Ltd. Registration with trajectory information with shape sensing
CN108175502B (en) * 2017-11-29 2021-08-17 苏州朗开医疗技术有限公司 Bronchoscope electromagnetic navigation system
US20190307511A1 (en) * 2018-04-10 2019-10-10 Biosense Webster (Israel) Ltd. Catheter localization using fiber optic shape sensing combined with current location
DE102018108643A1 (en) * 2018-04-11 2019-11-14 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. A position determining device for determining a position of an object within a tubular structure
US11457981B2 (en) * 2018-10-04 2022-10-04 Acclarent, Inc. Computerized tomography (CT) image correction using position and direction (P andD) tracking assisted optical visualization
CN111166329B (en) * 2018-10-24 2024-01-30 四川锦江电子医疗器械科技股份有限公司 Stretchable annular catheter form determination method and device
US11547492B2 (en) * 2018-11-07 2023-01-10 St Jude Medical International Holding, Sa.R.L. Mechanical modules of catheters for sensor fusion processes
CN109718437A (en) * 2018-12-28 2019-05-07 北京谊安医疗系统股份有限公司 Respiration parameter adjusting method, device and the Breathing Suppotion equipment of Breathing Suppotion equipment
CN111588464B (en) * 2019-02-20 2022-03-04 忞惪医疗机器人(苏州)有限公司 Operation navigation method and system
CN110478040A (en) * 2019-08-19 2019-11-22 王小丽 Obtain the method and device of alimentary stent implantation navigation image

Also Published As

Publication number Publication date
CN113116475A (en) 2021-07-16
CN113116524A (en) 2021-07-16
CN113100943A (en) 2021-07-13
CN113116524B (en) 2023-05-05
CN113116475B (en) 2023-06-20
CN215192193U (en) 2021-12-17

Similar Documents

Publication Publication Date Title
CN113100943B (en) Navigation processing method, device, system, equipment and medium in physiological channel
US11690527B2 (en) Apparatus and method for four dimensional soft tissue navigation in endoscopic applications
US20200197106A1 (en) Registration with trajectory information with shape sensing
EP3506827B1 (en) Respiration motion stabilization for lung magnetic navigation system
US11202680B2 (en) Systems and methods of registration for image-guided surgery
US20200279412A1 (en) Probe localization
JP5372407B2 (en) Medical equipment
US7901348B2 (en) Catheterscope 3D guidance and interface system
US11064955B2 (en) Shape sensing assisted medical procedure
CN111281533A (en) Deformable registration of computer-generated airway models to airway trees
US20240041535A1 (en) Dynamic deformation tracking for navigational bronchoscopy
JP2019533503A (en) Positioning device for determining the position of an instrument in a tubular structure
CN111386078A (en) Systems, methods, and computer readable media for non-rigidly registering electromagnetic navigation space to a CT volume
CN114271909A (en) Information processing method, device, system, equipment and medium for chest puncture
CN116801828A (en) Dynamic deformation tracking for navigation bronchoscopy
CN114288523A (en) Detection method and device of flexible instrument, surgical system, equipment and medium
CN116636928A (en) Registration progress detection method and system for lung trachea and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant