CN112263331B - System and method for presenting medical instrument vision in vivo


Info

Publication number: CN112263331B (granted); application number CN202011188144.6A
Authority: CN (China)
Other versions: CN112263331A (application publication)
Other languages: Chinese (zh)
Inventor: 宋新华
Current and original assignee: Shanghai Chuyun Kairui Management Consulting Co., Ltd.
Priority and filing date: 2020-10-30
Publication date of CN112263331A: 2021-01-26
Grant publication date of CN112263331B: 2022-04-05
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 2034/2046 Tracking techniques
    • A61B 2034/2055 Optical tracking systems
    • A61B 2034/2057 Details of tracking cameras
    • A61B 2034/2061 Tracking techniques using shape-sensors, e.g. fiber shape sensors with Bragg gratings
    • A61B 2034/2065 Tracking using image or pattern recognition
    • A61B 2034/2068 Surgical navigation systems; Devices for tracking or guiding surgical instruments using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • A61B 2034/2072 Reference field transducer attached to an instrument or patient
    • A61B 2090/364 Correlation of different images or relation of image positions in respect to the body

Abstract

The invention provides a system and a method for visually presenting a medical instrument inside the body. The system comprises a sensing optical fiber on which a first positioning mark is set, a form sensing device, and a head-mounted display device on which a second positioning mark is set. The form sensing device comprises a first acquisition module, for acquiring the first and second spatial position coordinates of the first and second positioning marks relative to the first acquisition module; a second acquisition module, for acquiring a total shape curve of the whole sensing optical fiber relative to the second acquisition module; and a calculation module, for obtaining the third and fourth shape curves of any position of the sensing optical fiber relative to the left lens and the right lens and outputting image signals to the left lens and the right lens respectively. The scheme intuitively shows the position of medical instruments such as an endoscope or a surgical robot inside the human body, which assists the operation.

Description

System and method for presenting medical instrument vision in vivo
Technical Field
The invention relates to the field of medical technology, and in particular to a system and a method for visually presenting a medical instrument inside the body.
Background
An endoscope is an inspection instrument that integrates traditional optics, ergonomics, precision machinery, modern electronics, mathematics and software. It is equipped with an image sensor, optical lenses, a light-source illumination device, a mechanical device and the like; it can enter the stomach through the oral cavity or enter the body through other natural orifices, and it can reveal lesions that X-rays cannot show. In a medical operation such as an endoscopic or robotic procedure, the position, bending shape and other states of the instrument or robot inside the human body must be known in order to ensure the accuracy of the operation.
Shape sensing technology can reflect the shape of a medical instrument inside the human body: the instrument's shape can be rendered against a virtual background of the body cavity and shown on a two-dimensional display, but this approach cannot convey the spatial structure of an instrument such as an endoscope. Systems have therefore been proposed that combine a 3D image generated in advance by helical CT with an accurate measurement of the instrument's 3D shape obtained from a shape sensing system, and display both in the same image, thereby presenting the spatial structure of an instrument such as an endoscope inside the human body.
However, because the 3D images of medical instruments such as endoscopes and surgical robots obtained in the related art are shown on a separate display, the position of the instrument inside the human body cannot be perceived intuitively. A visual presentation method that intuitively reflects the positions of instruments such as endoscopes and surgical robots inside the body is therefore needed, as it would benefit the progress of surgery.
Disclosure of Invention
The invention aims to provide a system and a method for visually presenting a medical instrument inside the body, which can intuitively show the positions of medical instruments such as endoscopes and surgical robots in the human body and thereby assist the operation.
The technical scheme provided by the invention is as follows:
The invention provides an in-vivo medical instrument visual presentation system, comprising: a sensing optical fiber, which is embedded in the flexible insertion part of the medical instrument and on which a first positioning mark is set;
a form sensing device;
a head-mounted display device, on which a second positioning mark is set;
wherein the form sensing device comprises:
the first acquisition module, which is used to acquire, respectively, a first spatial position coordinate and a second spatial position coordinate of the first positioning mark and the second positioning mark relative to the first acquisition module;
the second acquisition module, which is used to acquire a total shape curve of the whole sensing optical fiber relative to the second acquisition module;
a calculation module, configured to calculate a first shape curve of the sensing optical fiber at any length relative to the first acquisition module according to the first spatial position coordinate and the total shape curve, to calculate a second shape curve of the sensing optical fiber at any length relative to the second positioning mark according to the position difference between the first spatial position coordinate and the second spatial position coordinate, and to obtain a third shape curve and a fourth shape curve of any position of the sensing optical fiber relative to the left lens and the right lens according to the position differences between the second positioning mark and the left lens and the right lens of the head-mounted display device;
and the calculation module renders a stereoscopic image of the sensing optical fiber from the third shape curve and the fourth shape curve and outputs image signals to the left lens and the right lens respectively.
In this scheme, a first positioning mark is set on the sensing optical fiber connected to a medical instrument such as an endoscope or a surgical robot, and a second positioning mark is set on the head-mounted display device used for image display. The first acquisition module of the form sensing device can therefore acquire the first and second spatial position coordinates of the first and second positioning marks relative to the first acquisition module, while the second acquisition module acquires the total shape curve of the whole sensing optical fiber relative to the second acquisition module. From the first spatial position coordinate and the total shape curve, the first shape curve of the sensing optical fiber at any length relative to the first acquisition module is calculated; from the position difference between the first and second spatial position coordinates, the second shape curve of the sensing optical fiber at any length relative to the second positioning mark is calculated; and from the position differences between the second positioning mark and the left and right lenses of the head-mounted display device, the third and fourth shape curves of any position of the sensing optical fiber relative to the left and right lenses are obtained. Rendering the third and fourth shape curves displays a stereoscopic image of the sensing optical fiber on the left and right lenses, so that after putting on the head-mounted display device a doctor can intuitively see where medical instruments such as an endoscope or a surgical robot are inside the human body, which assists the operation.
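To make this chain of coordinate transforms concrete, the following sketch (Python with NumPy) composes the poses described above. Every function name, variable name and numeric value is an illustrative assumption rather than something specified in the patent, and the stereo "rendering" is reduced to a toy pinhole projection of the fiber polyline into each lens frame.

    import numpy as np

    def make_T(R, t):
        """Build a 4x4 homogeneous transform from a 3x3 rotation R and a translation t."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def transform_curve(T, curve_xyz):
        """Apply the homogeneous transform T to an (N, 3) polyline sampled along the fiber."""
        homog = np.hstack([curve_xyz, np.ones((curve_xyz.shape[0], 1))])
        return (homog @ T.T)[:, :3]

    def project_pinhole(curve_xyz, focal=1.0):
        """Toy pinhole projection of points given in a lens frame (z axis pointing forward)."""
        z = np.clip(curve_xyz[:, 2], 1e-6, None)
        return focal * curve_xyz[:, :2] / z[:, None]

    # Assumed inputs:
    #   curve_in_cam     - first shape curve: fiber samples expressed in the frame of the
    #                      first acquisition module (depth camera), shape (N, 3)
    #   T_mark2_in_cam   - pose of the second positioning mark in the camera frame,
    #                      derived from the second spatial position coordinate
    #   T_left_in_mark2, T_right_in_mark2 - fixed offsets of the left and right lens
    #                      relative to the second positioning mark (known HMD geometry)
    curve_in_cam = np.column_stack([np.linspace(0.0, 0.3, 50),
                                    np.zeros(50),
                                    np.linspace(0.5, 0.8, 50)])
    T_mark2_in_cam = make_T(np.eye(3), np.array([0.0, -0.1, -0.4]))
    T_left_in_mark2 = make_T(np.eye(3), np.array([-0.03, 0.0, 0.0]))
    T_right_in_mark2 = make_T(np.eye(3), np.array([0.03, 0.0, 0.0]))

    # second shape curve: fiber expressed relative to the second positioning mark
    curve_in_mark2 = transform_curve(np.linalg.inv(T_mark2_in_cam), curve_in_cam)
    # third and fourth shape curves: fiber expressed relative to the left and right lenses
    curve_in_left = transform_curve(np.linalg.inv(T_left_in_mark2), curve_in_mark2)
    curve_in_right = transform_curve(np.linalg.inv(T_right_in_mark2), curve_in_mark2)

    left_image = project_pinhole(curve_in_left)    # image signal for the left lens
    right_image = project_pinhole(curve_in_right)  # image signal for the right lens

The chaining mirrors the text: camera frame to second positioning mark, then mark to each lens, with the stereoscopic impression arising from the small left/right lens offset.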
Further, the second acquisition module comprises an optical fiber sensor and a demodulation system.
The optical fiber sensor is used for detecting, in real time, the stress distribution of each section of each sensing optical fiber.
The demodulation system is used for calculating the curvature and deflection direction of the sensing optical fiber according to the stress distribution of each section, calculating the distribution of the tangent vector of the sensing optical fiber along its length according to the curvature and deflection direction, and calculating the total shape curve of any position of the sensing optical fiber relative to the demodulation system according to the distribution of the tangent vector along the length.
In other words, by detecting the stress distribution on the sensing optical fiber, the curvature and deflection direction of each section can be calculated; from the curvature and deflection direction, the distribution of the tangent vector along the length is obtained; and finally the total shape curve of any position of the sensing optical fiber relative to the demodulation system is calculated from that tangent-vector distribution.
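As an illustration of this reconstruction step, the following sketch (Python with NumPy; all names and sample values are assumptions, not taken from the patent) integrates a sampled curvature magnitude and deflection direction along the fiber length: each section's curvature rotates the local tangent, and cumulatively summing the tangent yields the shape curve relative to the fiber's proximal end. Twist of the fiber is ignored in this simplified sketch.

    import numpy as np

    def rotation_about_axis(axis, angle):
        """Rodrigues' formula: rotation matrix for a rotation of `angle` about the unit vector `axis`."""
        axis = axis / np.linalg.norm(axis)
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

    def reconstruct_curve(kappa, theta, ds):
        """
        Reconstruct a 3D fiber shape from per-segment curvature `kappa` (1/m) and
        deflection direction `theta` (rad, measured in the local cross-section),
        with segment length `ds` (m). Returns an (N+1, 3) polyline starting at the origin.
        """
        t = np.array([0.0, 0.0, 1.0])    # tangent
        n1 = np.array([1.0, 0.0, 0.0])   # cross-section axis 1
        n2 = np.array([0.0, 1.0, 0.0])   # cross-section axis 2
        pts = [np.zeros(3)]
        for k, th in zip(kappa, theta):
            if k > 1e-12:
                # the fiber bends toward this direction in the cross-section
                bend_dir = np.cos(th) * n1 + np.sin(th) * n2
                R = rotation_about_axis(np.cross(t, bend_dir), k * ds)
                t, n1, n2 = R @ t, R @ n1, R @ n2
            pts.append(pts[-1] + t * ds)
        return np.array(pts)

    # example: a quarter-circle bend of radius 0.5 m (curvature 2 m^-1), constant direction
    ds = 0.005
    kappa = np.full(157, 2.0)
    theta = np.zeros(157)
    curve = reconstruct_curve(kappa, theta, ds)   # the "total shape curve", sampled every ds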
Further, the second acquisition module is further configured to acquire a third spatial position coordinate of the first positioning mark relative to the second acquisition module;
the calculation module corrects the total shape curve using the position difference between the first spatial position coordinate and the third spatial position coordinate, and thereby obtains the first shape curve of the sensing optical fiber at any length relative to the first acquisition module.
It should be noted that the total shape curve is expressed relative to the demodulation system of the second acquisition module, not relative to the first acquisition module. If the first and second acquisition modules are located at the same position, the total shape curve can be used directly; if their positions differ, it must be corrected. Specifically, the total shape curve is corrected using the position difference between the first and third spatial position coordinates, which yields the first shape curve of the sensing optical fiber at any length relative to the first acquisition module.
Further, the calculation module first calculates a transformation matrix and a translation vector between the coordinate system of the first acquisition module and the coordinate system of the second acquisition module from the position difference between the first and third spatial position coordinates, and then obtains the first shape curve of the sensing optical fiber at any length relative to the first acquisition module from the transformation matrix, the translation vector and the total shape curve.
Specifically, let the first spatial position coordinate be (x1(s0), y1(s0), z1(s0)), where s0 is the length position of the first positioning mark along the whole fiber, and let the total shape curve be (x0(s), y0(s), z0(s)), where s is the length along the fiber. The two coordinate systems are related by a rigid transformation of the form

    [x1(s), y1(s), z1(s)]^T = R [x0(s), y0(s), z0(s)]^T + t,

from which the transformation matrix R and the translation vector t are calculated; applying them to the total shape curve then gives the first shape curve of the sensing optical fiber at any length relative to the first acquisition module.
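The patent derives R and t from the position difference between the first and third spatial position coordinates but does not spell out the algorithm; one standard way to realize such a fit, sketched below purely as an assumption (Python with NumPy, illustrative names and values), is a Kabsch/Procrustes fit between the dot coordinates of the first positioning mark as measured in the two coordinate systems, after which the recovered R and t are applied to every sample of the total shape curve.

    import numpy as np

    def kabsch(p_src, p_dst):
        """
        Least-squares rigid transform (R, t) such that R @ p_src_i + t ≈ p_dst_i.
        p_src, p_dst: (N, 3) corresponding point sets, N >= 3 and non-collinear.
        """
        c_src, c_dst = p_src.mean(axis=0), p_dst.mean(axis=0)
        H = (p_src - c_src).T @ (p_dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = c_dst - R @ c_src
        return R, t

    # dot coordinates of the first positioning mark in the two frames (synthetic data):
    # dots_in_demod -> third spatial position coordinates (second acquisition module frame)
    # dots_in_cam   -> first spatial position coordinates (first acquisition module frame)
    dots_in_demod = np.array([[0.00, 0.00, 0.00],
                              [0.02, 0.00, 0.00],
                              [0.00, 0.02, 0.00],
                              [0.00, 0.00, 0.02]])
    R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    dots_in_cam = dots_in_demod @ R_true.T + np.array([0.1, 0.2, 0.3])

    R, t = kabsch(dots_in_demod, dots_in_cam)

    # total shape curve (x0(s), y0(s), z0(s)) sampled in the demodulation-system frame
    total_curve = np.column_stack([np.linspace(0.0, 0.4, 80), np.zeros(80), np.zeros(80)])
    # first shape curve: the same samples expressed in the first acquisition module's frame
    first_curve = total_curve @ R.T + t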
Further, the first acquisition module is a depth camera, and the depth camera is rigidly connected with a host of the form sensing device.
To ensure display precision, the depth camera needs to be fixed. In this scheme it is rigidly connected to the host of the form sensing device; in other embodiments it may be fixed at other positions.
Further, the first positioning mark and the second positioning mark are each formed by a group of small geometric dots that emit or reflect visible or infrared light; the distribution of the first positioning mark has at least one resolvable axis direction, and the distribution of the second positioning mark has at least two resolvable axis directions.
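How the first acquisition module turns such a dot group into a position and a resolvable axis is not detailed in the patent; one plausible approach, sketched below as an assumption (Python with NumPy, illustrative names and values), takes the centroid of the 3D dot coordinates returned by the depth camera as the mark's position and the dominant principal component of the dot cloud as its resolvable axis direction.

    import numpy as np

    def mark_position_and_axis(dots_xyz):
        """
        Estimate the position (centroid) and one resolvable axis direction
        (dominant principal component) of a positioning mark from the 3D
        coordinates of its detected dots, shape (N, 3), N >= 2.
        """
        centroid = dots_xyz.mean(axis=0)
        centered = dots_xyz - centroid
        # first right-singular vector = direction of largest spread of the dot cloud
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        axis = vt[0] / np.linalg.norm(vt[0])
        return centroid, axis

    # synthetic example: five dots laid out along an oblique line with small offsets
    dots = np.array([[0.00, 0.00, 0.50],
                     [0.01, 0.01, 0.50],
                     [0.02, 0.02, 0.51],
                     [0.03, 0.03, 0.51],
                     [0.04, 0.04, 0.52]])
    position, axis = mark_position_and_axis(dots)
    # `position` plays the role of the mark's spatial position coordinate; a mark with
    # two resolvable axes (the second positioning mark) would also use the second
    # principal component of the dot cloud.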
Furthermore, the sensing optical fiber is an MCF (multicore fiber) comprising at least three cores helically distributed around the outside and one core running through the center.
In addition, the invention provides a visual presentation method based on the in vivo medical instrument visual presentation system, which comprises the following steps:
setting a first positioning mark on a sensing optical fiber connected with a medical instrument, and setting a second positioning mark on a head-mounted display device;
acquiring, in real time, a first spatial position coordinate and a second spatial position coordinate of the first positioning mark and the second positioning mark relative to the first acquisition module;
acquiring, in real time, a total shape curve of the whole sensing optical fiber relative to the second acquisition module;
calculating a first shape curve of the sensing optical fiber at any length relative to the first acquisition module according to the first spatial position coordinate and the total shape curve;
calculating a second shape curve of the sensing optical fiber at any length relative to the second positioning mark according to the position difference between the first spatial position coordinate and the second spatial position coordinate;
obtaining a third shape curve and a fourth shape curve of any position of the sensing optical fiber relative to the left lens and the right lens according to the position differences between the second positioning mark and the left lens and the right lens of the head-mounted display device;
rendering a stereoscopic image of the sensing optical fiber from the third shape curve and the fourth shape curve, and outputting image signals to the left lens and the right lens respectively.
By setting the first positioning mark on the sensing optical fiber connected to a medical instrument such as an endoscope or a surgical robot, and the second positioning mark on the head-mounted display device used for image display, the first acquisition module of the form sensing device can acquire the first and second spatial position coordinates of the two marks relative to the first acquisition module, while the second acquisition module acquires the total shape curve of the whole sensing optical fiber relative to the second acquisition module. The first shape curve of the fiber at any length relative to the first acquisition module is calculated from the first spatial position coordinate and the total shape curve; the second shape curve relative to the second positioning mark is calculated from the position difference between the first and second spatial position coordinates; and the third and fourth shape curves relative to the left and right lenses are obtained from the position differences between the second positioning mark and the left and right lenses of the head-mounted display device. Rendering the third and fourth shape curves displays a stereoscopic image of the sensing optical fiber on the left and right lenses, so that a doctor wearing the head-mounted display device can intuitively see the positions of medical instruments such as an endoscope or a surgical robot inside the human body, which assists the operation.
Further, acquiring, in real time, the total shape curve of the entire sensing optical fiber relative to the second acquisition module specifically comprises the steps of:
detecting, in real time, the stress distribution of each section of each sensing optical fiber;
calculating the curvature and deflection direction of the sensing optical fiber according to the stress distribution of each section;
calculating the distribution of the tangent vector of the sensing optical fiber along its length according to the curvature and deflection direction;
and calculating the total shape curve of any position of the sensing optical fiber relative to the second acquisition module according to the distribution of the tangent vector along the length.
Further, calculating the first shape curve of the sensing optical fiber at any length relative to the first acquisition module according to the first spatial position coordinate and the total shape curve specifically comprises:
acquiring a third spatial position coordinate of the first positioning mark relative to the second acquisition module;
and correcting the total shape curve using the position difference between the first spatial position coordinate and the third spatial position coordinate, so as to obtain the first shape curve of the sensing optical fiber at any length relative to the first acquisition module.
It should be noted that the total shape curve is expressed relative to the demodulation system of the second acquisition module, not relative to the first acquisition module. If the two modules are located at the same position the total shape curve can be used directly; otherwise it must be corrected, specifically by using the position difference between the first and third spatial position coordinates, which yields the first shape curve of the sensing optical fiber at any length relative to the first acquisition module.
Specifically, let the first spatial position coordinate be (x1(s0), y1(s0), z1(s0)), where s0 is the length position of the first positioning mark along the whole fiber, and let the total shape curve be (x0(s), y0(s), z0(s)), where s is the length along the fiber. The two coordinate systems are related by a rigid transformation of the form

    [x1(s), y1(s), z1(s)]^T = R [x0(s), y0(s), z0(s)]^T + t,

from which the transformation matrix R and the translation vector t are calculated; applying them to the total shape curve then gives the first shape curve of the sensing optical fiber at any length relative to the first acquisition module.
According to the system and method for presenting in-vivo medical instrument vision described above, a first positioning mark is set on the sensing optical fiber connected to a medical instrument such as an endoscope or a surgical robot, and a second positioning mark is set on the head-mounted display device used for image display. The first acquisition module of the form sensing device acquires the first and second spatial position coordinates of the two marks relative to the first acquisition module, and the second acquisition module acquires the total shape curve of the whole sensing optical fiber relative to the second acquisition module. The first shape curve of the fiber at any length relative to the first acquisition module is then calculated from the first spatial position coordinate and the total shape curve; the second shape curve relative to the second positioning mark is calculated from the position difference between the first and second spatial position coordinates; and the third and fourth shape curves relative to the left and right lenses are obtained from the position differences between the second positioning mark and the left and right lenses of the head-mounted display device. Rendering the third and fourth shape curves displays a stereoscopic image of the sensing optical fiber on the left and right lenses, so that a doctor wearing the head-mounted display device can intuitively see the positions of medical instruments such as an endoscope or a surgical robot inside the human body, which assists the operation.
Drawings
The foregoing features, technical features, advantages and embodiments of the present invention will be further explained in the following detailed description of the preferred embodiments, which is to be read in connection with the accompanying drawings.
FIG. 1 is a schematic view of a sensing fiber connection structure according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a display structure of a head-mounted display device according to an embodiment of the invention;
FIG. 3 is a schematic structural diagram of a morphology sensing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic overall flow chart of an embodiment of the present invention.
Reference numbers in the figures: 1 - sensing optical fiber; 2 - form sensing device; 21 - first acquisition module; 22 - second acquisition module; 23 - calculation module; 3 - head-mounted display device; 4 - first positioning mark; 5 - second positioning mark.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled. In this document, "one" means not only "only one" but also a case of "more than one".
Example 1
One embodiment of the present invention, as shown in fig. 1 to 3, provides an in-vivo medical device visual presentation system, which includes a sensing optical fiber 1, a form sensing device 2, and a head-mounted display device 3.
The sensing optical fiber 1 is embedded in the flexible insertion part of a medical instrument, and a first positioning mark 4 is set on the sensing optical fiber 1. A second positioning mark 5 is set on the head-mounted display device 3; the head-mounted display device 3 is provided with a left lens and a right lens, and the positions of the left lens and the right lens are fixed relative to the second positioning mark 5.
Preferably, the first positioning mark 4 and the second positioning mark 5 are each formed by a group of small geometric dots that emit or reflect visible or infrared light; the distribution of the first positioning mark 4 has at least one resolvable axis direction, and the distribution of the second positioning mark 5 has at least two resolvable axis directions.
Preferably, the sensing optical fiber 1 is an MCF (multicore fiber) comprising at least three cores helically distributed around the outside and one core running through the center.
The form sensing device 2 comprises a first acquisition module 21, a second acquisition module 22 and a calculation module 23; the calculation module 23 is a general-purpose computing module.
The first acquisition module 21 is configured to acquire a first spatial position coordinate and a second spatial position coordinate of the first positioning mark 4 and the second positioning mark 5, respectively, relative to the first acquisition module 21.
Preferably, the first acquisition module 21 is a depth camera, and the depth camera is rigidly connected to the host of the form sensing device 2.
To ensure display precision, the depth camera needs to be fixed. In this scheme it is rigidly connected to the host of the form sensing device 2; in other embodiments it may be fixed at other positions.
The second acquisition module 22 is configured to acquire a total shape curve of the whole sensing optical fiber 1 relative to the second acquisition module 22.
The calculation module 23 is configured to calculate a first shape curve of the sensing optical fiber 1 at any length relative to the first acquisition module 21 according to the first spatial position coordinate and the total shape curve, to calculate a second shape curve of the sensing optical fiber 1 at any length relative to the second positioning mark 5 according to the position difference between the first spatial position coordinate and the second spatial position coordinate, and to obtain a third shape curve and a fourth shape curve of any position of the sensing optical fiber 1 relative to the left lens and the right lens according to the position differences between the second positioning mark 5 and the left lens and the right lens of the head-mounted display device 3. The calculation module 23 then renders a stereoscopic image of the sensing optical fiber 1 from the third shape curve and the fourth shape curve and outputs the image signals to the left lens and the right lens respectively.
In this scheme, the first positioning mark 4 is set on the sensing optical fiber 1 connected to a medical instrument such as an endoscope or a surgical robot, and the second positioning mark 5 is set on the head-mounted display device 3 used for image display. The first acquisition module 21 of the form sensing device 2 can therefore acquire the first and second spatial position coordinates of the first positioning mark 4 and the second positioning mark 5 relative to the first acquisition module 21, while the second acquisition module 22 acquires the total shape curve of the whole sensing optical fiber 1 relative to the second acquisition module 22. From the first spatial position coordinate and the total shape curve, the first shape curve of the sensing optical fiber 1 at any length relative to the first acquisition module 21 is calculated; from the position difference between the first and second spatial position coordinates, the second shape curve of the sensing optical fiber 1 at any length relative to the second positioning mark 5 is calculated; and from the position differences between the second positioning mark 5 and the left and right lenses of the head-mounted display device 3, the third and fourth shape curves of any position of the sensing optical fiber 1 relative to the left and right lenses are obtained. Rendering the third and fourth shape curves displays a stereoscopic image of the sensing optical fiber 1 on the left and right lenses, so that after putting on the head-mounted display device 3 a doctor can intuitively see the positions of medical instruments such as an endoscope or a surgical robot inside the human body, which assists the operation.
Example 2
In an embodiment of the present invention, based on embodiment 1, the second acquisition module 22 includes an optical fiber sensor and a demodulation system.
The optical fiber sensor detects, in real time, the stress distribution of each section of each sensing optical fiber 1. The demodulation system calculates the curvature and deflection direction of the sensing optical fiber from the stress distribution of each section, calculates the distribution of the fiber's tangent vector along its length from the curvature and deflection direction, and calculates the total shape curve of any position of the sensing optical fiber relative to the demodulation system from that tangent-vector distribution.
In other words, by detecting the stress distribution on the sensing optical fiber 1, the curvature and deflection direction of each section can be calculated; from these, the distribution of the tangent vector along the length is obtained; and finally the total shape curve of any position of the sensing optical fiber relative to the demodulation system is calculated from the tangent-vector distribution.
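For context, a common way to go from the measured strains of a multicore fiber's outer cores to the per-section curvature and deflection direction (the patent does not detail this step, so the sketch below is an assumption, with illustrative names and values) is a least-squares fit of the bending-strain model eps_i ≈ -kappa · r · cos(theta_b - theta_i) + eps_common; the central core lies on the neutral axis and mainly carries the common-mode (temperature and axial) strain.

    import numpy as np

    def curvature_from_core_strains(strains, core_angles, core_radius):
        """
        Estimate curvature magnitude (1/m) and deflection direction (rad) of one
        fiber section from the strains of the outer cores of a multicore fiber.

        Bending model per outer core i:
            eps_i = -kappa * r * cos(theta_b - theta_i) + eps_common
        which is linear in a = -kappa*r*cos(theta_b), b = -kappa*r*sin(theta_b), eps_common.
        """
        A = np.column_stack([np.cos(core_angles),
                             np.sin(core_angles),
                             np.ones_like(core_angles)])
        (a, b, eps_common), *_ = np.linalg.lstsq(A, strains, rcond=None)
        kappa = np.hypot(a, b) / core_radius
        theta_b = np.arctan2(-b, -a)
        return kappa, theta_b, eps_common

    # three outer cores 120 degrees apart, 35 micrometres from the fiber axis (assumed values);
    # for helically laid cores the angles rotate along the fiber, so the local angles of the
    # section being demodulated would be used here
    core_angles = np.deg2rad([0.0, 120.0, 240.0])
    core_radius = 35e-6

    # synthetic check: curvature 4 m^-1 bending toward 30 degrees, plus a common-mode strain
    kappa_true, theta_true, eps0 = 4.0, np.deg2rad(30.0), 5e-6
    strains = -kappa_true * core_radius * np.cos(theta_true - core_angles) + eps0

    kappa, theta_b, _ = curvature_from_core_strains(strains, core_angles, core_radius)
    # kappa ≈ 4.0 and theta_b ≈ 0.524 rad; these per-section values feed the
    # tangent-vector integration that produces the total shape curve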
Example 3
In an embodiment of the present invention, on the basis of embodiment 1 or 2, the second acquisition module 22 is further configured to acquire a third spatial position coordinate of the first positioning mark 4 relative to the second acquisition module 22.
The calculation module 23 corrects the total shape curve using the position difference between the first spatial position coordinate and the third spatial position coordinate, and obtains the first shape curve of the sensing optical fiber 1 at any length relative to the first acquisition module 21.
It should be noted that the total shape curve is expressed relative to the demodulation system of the second acquisition module 22, not relative to the first acquisition module 21. If the two modules are located at the same position the total shape curve can be used directly; otherwise it must be corrected. Specifically, the total shape curve is corrected using the position difference between the first and third spatial position coordinates, yielding the first shape curve of the sensing optical fiber 1 at any length relative to the first acquisition module 21.
Preferably, the calculation module 23 first calculates a transformation matrix and a translation vector between the coordinate system of the first acquisition module 21 and the coordinate system of the second acquisition module 22 from the position difference between the first and third spatial position coordinates, and then obtains the first shape curve of the sensing optical fiber 1 at any length relative to the first acquisition module 21 from the transformation matrix, the translation vector and the total shape curve.
Specifically, let the first spatial position coordinate be (x1(s0), y1(s0), z1(s0)), where s0 is the length position of the first positioning mark along the whole fiber, and let the total shape curve be (x0(s), y0(s), z0(s)), where s is the length along the fiber. The two coordinate systems are related by a rigid transformation of the form

    [x1(s), y1(s), z1(s)]^T = R [x0(s), y0(s), z0(s)]^T + t,

from which the transformation matrix R and the translation vector t are calculated; applying them to the total shape curve then gives the first shape curve of the sensing optical fiber 1 at any length relative to the first acquisition module 21.
Example 4
In an embodiment of the present invention, as shown in fig. 4, on the basis of any of the above embodiments, a visual presentation method for an in-vivo medical instrument is further provided, comprising the steps of:
S1, setting a first positioning mark on the sensing optical fiber connected to the medical instrument, and setting a second positioning mark on the head-mounted display device.
S2, acquiring, in real time, the first spatial position coordinate and the second spatial position coordinate of the first positioning mark and the second positioning mark relative to the first acquisition module.
S3, acquiring, in real time, the total shape curve of the whole sensing optical fiber relative to the second acquisition module.
S4, calculating the first shape curve of the sensing optical fiber at any length relative to the first acquisition module according to the first spatial position coordinate and the total shape curve.
S5, calculating the second shape curve of the sensing optical fiber at any length relative to the second positioning mark according to the position difference between the first spatial position coordinate and the second spatial position coordinate.
S6, obtaining the third shape curve and the fourth shape curve of any position of the sensing optical fiber relative to the left lens and the right lens according to the position differences between the second positioning mark and the left lens and the right lens of the head-mounted display device.
S7, rendering a stereoscopic image of the sensing optical fiber from the third shape curve and the fourth shape curve, and outputting image signals to the left lens and the right lens respectively.
By setting the first positioning mark on the sensing optical fiber connected to a medical instrument such as an endoscope or a surgical robot, and the second positioning mark on the head-mounted display device used for image display, the first acquisition module of the form sensing device can acquire the first and second spatial position coordinates of the two marks relative to the first acquisition module, while the second acquisition module acquires the total shape curve of the whole sensing optical fiber relative to the second acquisition module. The first shape curve of the fiber at any length relative to the first acquisition module is calculated from the first spatial position coordinate and the total shape curve; the second shape curve relative to the second positioning mark is calculated from the position difference between the first and second spatial position coordinates; and the third and fourth shape curves relative to the left and right lenses are obtained from the position differences between the second positioning mark and the left and right lenses of the head-mounted display device. Rendering the third and fourth shape curves displays a stereoscopic image of the sensing optical fiber on the left and right lenses. After putting on the head-mounted display device and turning the field of view toward the patient, the doctor sees the image in the lenses coincide with the actual spatial structure of the sensing optical fiber inside the patient's body, and can thus intuitively see the positions of medical instruments such as an endoscope or a surgical robot inside the human body, which assists the operation.
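Putting steps S2 to S7 together, a per-frame update loop of such a method could look like the skeleton below; it is an assumed orchestration only (Python, illustrative names), with simple placeholder stubs standing in for the measurement, demodulation and rendering stages described above.

    import numpy as np

    # Placeholder stubs: in a real system these would wrap the depth camera, the fiber
    # demodulation system and the head-mounted display renderer, respectively.
    def get_mark_coordinates():
        """S2: first and second spatial position coordinates from the first acquisition module."""
        return np.array([0.1, 0.0, 0.6]), np.array([0.0, -0.1, 0.2])

    def get_total_shape_curve():
        """S3: total shape curve of the fiber from the second acquisition module, shape (N, 3)."""
        return np.column_stack([np.linspace(0.0, 0.3, 60), np.zeros(60), np.zeros(60)])

    def to_first_module_frame(total_curve, first_mark):
        """S4: stand-in for the R, t correction of the total shape curve."""
        return total_curve + first_mark

    def to_lens_frames(curve_rel_mark2, lens_offsets):
        """S6: express the curve relative to each lens (translation-only stand-in)."""
        return {name: curve_rel_mark2 - offset for name, offset in lens_offsets.items()}

    lens_offsets = {"left": np.array([-0.03, 0.0, 0.0]), "right": np.array([0.03, 0.0, 0.0])}

    for _frame in range(3):                                           # S2..S7 repeat in real time
        first_mark, second_mark = get_mark_coordinates()              # S2
        total_curve = get_total_shape_curve()                         # S3
        first_curve = to_first_module_frame(total_curve, first_mark)  # S4
        second_curve = first_curve - second_mark                      # S5 (translation-only stand-in)
        lens_curves = to_lens_frames(second_curve, lens_offsets)      # S6
        # S7: project each curve into its lens and hand the result to the display
        images = {name: c[:, :2] / np.clip(c[:, 2:3], 1e-6, None) for name, c in lens_curves.items()}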
Example 5
In an embodiment of the present invention, on the basis of embodiment 4, a total shape curve of the entire sensing optical fiber with respect to the second acquisition module is acquired in real time, which specifically includes the steps of:
and S31, detecting the stress distribution of each section on each sensing optical fiber in real time.
And S32, calculating the curvature and deflection direction of the sensing optical fiber according to the stress distribution of each section on each sensing optical fiber.
And S33, calculating and obtaining the distribution of the tangent vector of the sensing optical fiber on the length according to the curvature and the deflection direction of the sensing optical fiber.
And S34, calculating a total shape curve of the arbitrary position of the sensing optical fiber relative to the second acquisition module according to the distribution of the tangent vector of the sensing optical fiber on the length.
Preferably, calculating the first shape curve of the sensing optical fiber at any length relative to the first acquisition module according to the first spatial position coordinate and the total shape curve specifically comprises:
S41, acquiring a third spatial position coordinate of the first positioning mark relative to the second acquisition module.
S42, correcting the total shape curve using the position difference between the first spatial position coordinate and the third spatial position coordinate, and obtaining the first shape curve of the sensing optical fiber at any length relative to the first acquisition module.
It should be noted that the total shape curve is expressed relative to the demodulation system of the second acquisition module, not relative to the first acquisition module. If the two modules are located at the same position the total shape curve can be used directly; otherwise it must be corrected, specifically by using the position difference between the first and third spatial position coordinates, which yields the first shape curve of the sensing optical fiber at any length relative to the first acquisition module.
Specifically, let the first spatial position coordinate be (x1(s0), y1(s0), z1(s0)), where s0 is the length position of the first positioning mark along the whole fiber, and let the total shape curve be (x0(s), y0(s), z0(s)), where s is the length along the fiber. The two coordinate systems are related by a rigid transformation of the form

    [x1(s), y1(s), z1(s)]^T = R [x0(s), y0(s), z0(s)]^T + t,

from which the transformation matrix R and the translation vector t are calculated; applying them to the total shape curve then gives the first shape curve of the sensing optical fiber at any length relative to the first acquisition module.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is only a preferred embodiment of the present invention. It should be pointed out that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. An in-vivo medical device visual presentation system, comprising:
a sensing optical fiber, which is embedded in the flexible insertion part of the medical instrument and on which a first positioning mark is set;
a form sensing device; and
a head-mounted display device, on which a second positioning mark is set;
wherein the form sensing device comprises:
a first acquisition module, for respectively acquiring a first spatial position coordinate and a second spatial position coordinate of the first positioning mark and the second positioning mark relative to the first acquisition module;
a second acquisition module, for acquiring a total shape curve of the whole sensing optical fiber relative to the second acquisition module;
a calculation module, configured to calculate a first shape curve of the sensing optical fiber at any length relative to the first acquisition module according to the first spatial position coordinate and the total shape curve, to calculate a second shape curve of the sensing optical fiber at any length relative to the second positioning mark according to the position difference between the first spatial position coordinate and the second spatial position coordinate, and to obtain a third shape curve and a fourth shape curve of any position of the sensing optical fiber relative to the left lens and the right lens according to the position differences between the second positioning mark and the left lens and the right lens of the head-mounted display device;
and the calculation module renders a stereoscopic image of the sensing optical fiber from the third shape curve and the fourth shape curve and outputs image signals to the left lens and the right lens respectively.
2. The in vivo medical device visual presentation system of claim 1, wherein: the second acquisition module comprises an optical fiber sensor and a demodulation system,
the optical fiber sensor is used for detecting, in real time, the stress distribution of each section of each sensing optical fiber,
and the demodulation system is used for calculating the curvature and deflection direction of the sensing optical fiber according to the stress distribution of each section, calculating the distribution of the tangent vector of the sensing optical fiber along its length according to the curvature and deflection direction, and calculating the total shape curve of any position of the sensing optical fiber relative to the demodulation system according to the distribution of the tangent vector along the length.
3. The in vivo medical device visual presentation system of claim 1, wherein: the second acquisition module is further configured to acquire a third spatial position coordinate of the first positioning mark relative to the second acquisition module;
the calculation module corrects the total shape curve using the position difference between the first spatial position coordinate and the third spatial position coordinate, and obtains the first shape curve of the sensing optical fiber at any length relative to the first acquisition module.
4. The in vivo medical device visual presentation system of claim 3, wherein: the calculation module calculates a transformation matrix and a translation vector between the coordinate system of the first acquisition module and the coordinate system of the second acquisition module according to the position difference between the first spatial position coordinate and the third spatial position coordinate, and then obtains the first shape curve of the sensing optical fiber at any length relative to the first acquisition module through the transformation matrix, the translation vector and the total shape curve.
5. The in vivo medical device visual presentation system of claim 1, wherein: the first acquisition module is a depth camera, and the depth camera is rigidly connected with a host of the form sensing device.
6. The in vivo medical device visual presentation system of claim 1, wherein: the first positioning mark and the second positioning mark are each formed by a group of small geometric dots that emit or reflect visible or infrared light, the distribution of the first positioning mark has at least one resolvable axis direction, and the distribution of the second positioning mark has at least two resolvable axis directions.
7. The in vivo medical device visual presentation system of claim 1, wherein: the sensing optical fiber is an MCF multi-core optical fiber and comprises at least three fiber cores which are spirally distributed on the outer side and one fiber core which penetrates through the center.
8. A visual presentation method based on the in-vivo medical device visual presentation system according to any one of claims 1 to 7, comprising the steps of:
setting a first positioning mark on a sensing optical fiber connected with a medical instrument, and setting a second positioning mark on a head-mounted display device;
acquiring, in real time, a first spatial position coordinate and a second spatial position coordinate of the first positioning mark and the second positioning mark relative to the first acquisition module;
acquiring, in real time, a total shape curve of the whole sensing optical fiber relative to the second acquisition module;
calculating a first shape curve of the sensing optical fiber at any length relative to the first acquisition module according to the first spatial position coordinate and the total shape curve;
calculating a second shape curve of the sensing optical fiber at any length relative to the second positioning mark according to the position difference between the first spatial position coordinate and the second spatial position coordinate;
obtaining a third shape curve and a fourth shape curve of any position of the sensing optical fiber relative to the left lens and the right lens according to the position differences between the second positioning mark and the left lens and the right lens of the head-mounted display device;
rendering a stereoscopic image of the sensing optical fiber from the third shape curve and the fourth shape curve, and outputting image signals to the left lens and the right lens respectively.
9. The visual presentation method according to claim 8, wherein said acquiring a total shape curve of the entire sensing optical fiber with respect to the second acquisition module in real time specifically comprises the steps of:
detecting, in real time, the stress distribution of each section of each sensing optical fiber;
calculating the curvature and deflection direction of the sensing optical fiber according to the stress distribution of each section;
calculating the distribution of the tangent vector of the sensing optical fiber along its length according to the curvature and deflection direction;
and calculating the total shape curve of any position of the sensing optical fiber relative to the second acquisition module according to the distribution of the tangent vector along the length.
10. The visual presentation method of claim 8, wherein said calculating a first shape curve of the sensing optical fiber at any length relative to the first acquisition module according to the first spatial position coordinate and the total shape curve specifically comprises:
acquiring a third spatial position coordinate of the first positioning mark relative to the second acquisition module;
and correcting the total shape curve using the position difference between the first spatial position coordinate and the third spatial position coordinate, so as to obtain the first shape curve of the sensing optical fiber at any length relative to the first acquisition module.
Application CN202011188144.6A, "System and method for presenting medical instrument vision in vivo", filed 2020-10-30 (priority date 2020-10-30), granted as CN112263331B; legal status: Active.

Publications (2)

    CN112263331A, published 2021-01-26
    CN112263331B, granted 2022-04-05

Family ID: 74345420

Families citing this family (1)

    * CN113349929B, 2021-05-21 / 2022-11-11, Tsinghua University: Spatial positioning system for distal locking hole of intramedullary nail

Patent citations (3)

    * EP1225454A2, 2001-01-03 / 2002-07-24, Carl Zeiss: Method and device for fixing a position
    * CN102753092A, 2010-02-09 / 2012-10-24, Koninklijke Philips Electronics N.V.: Apparatus, system and method for imaging and treatment using optical position sensing
    * CN111265299A, 2020-02-19 / 2020-06-12, University of Shanghai for Science and Technology: Operation navigation method based on optical fiber shape sensing

Family cites families (5)

    * CN201578365U, 2009-12-31 / 2010-09-15, 杨晓峰: Surgical oncology intraoperative fluorescent navigation system
    * IL221863A, 2012-09-10 / 2014-01-30, Elbit Systems Ltd: Digital system for surgical video capturing and display
    * TW201505603A, 2013-07-16 / 2015-02-16, Seiko Epson Corp: Information processing apparatus, information processing method, and information processing system
    * US11278354B2, 2015-09-10 / 2022-03-22, Intuitive Surgical Operations, Inc.: Systems and methods for using tracking in image-guided medical procedure
    * EP3445048A1, 2017-08-15 / 2019-02-20, Holo Surgical Inc.: A graphical user interface for a surgical navigation system for providing an augmented reality image during operation

(Dates are priority date / publication date; * cited by examiner, † cited by third party.)



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant