CN114191078B - Endoscope operation navigation robot system based on mixed reality - Google Patents

Endoscope operation navigation robot system based on mixed reality

Info

Publication number
CN114191078B
CN114191078B (application CN202111641934.XA)
Authority
CN
China
Prior art keywords
endoscope
image
navigation
images
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111641934.XA
Other languages
Chinese (zh)
Other versions
CN114191078A (en)
Inventor
刘东麟
安涌
张翠华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Fudan Digital Medical Technology Co ltd
Original Assignee
Shanghai Fudan Digital Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Fudan Digital Medical Technology Co ltd filed Critical Shanghai Fudan Digital Medical Technology Co ltd
Priority to CN202111641934.XA priority Critical patent/CN114191078B/en
Publication of CN114191078A publication Critical patent/CN114191078A/en
Application granted granted Critical
Publication of CN114191078B publication Critical patent/CN114191078B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2065Tracking using image or pattern recognition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • A61B2034/301Surgical robots for introducing or steering flexible instruments inserted into the body, e.g. catheters or endoscopes

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Endoscopes (AREA)

Abstract

The invention discloses a mixed-reality-based endoscopic surgery navigation robot system comprising a mixed reality surgical navigation module and an endoscopic surgery robot module. The endoscopic surgery robot module clamps an endoscope and adjusts its spatial position to acquire endoscope images; the mixed reality surgical navigation module volume-renders image-guided space images and fuses them with the endoscope images to obtain a virtual-real combined navigation image. The invention provides effective navigation information that helps the surgeon better grasp the environment surrounding the operative area, distinguish tissue structures, and locate pathological tissue, making the operation safer and more effective.

Description

Endoscope operation navigation robot system based on mixed reality
Technical Field
The invention belongs to the technical field of medical instruments, and particularly relates to an endoscopic surgery navigation robot system based on mixed reality.
Background
Compared with traditional open surgery, minimally invasive surgery offers smaller wounds, less pain, and faster recovery. Endoscopic surgery is one of the most common minimally invasive procedures and is important for reducing patient suffering, yet the very smallness of the incision introduces new difficulties. Because the surgical field is small, the surgeon finds it hard to grasp conditions around the operative site; because the lens sits close to the tissue, the visible range of the endoscope is only a few centimeters even when its field of view is designed to be very wide. At the same time, the operating space is narrow, so dedicated instruments must reach the operative area through a dedicated surgical channel, and the constraints of channel and space make the operation extremely difficult.
These limitations of view and workspace place heavy demands on the operating surgeon, which is why computer-aided surgery has promising applications in endoscopic procedures. Preoperative images form an image-guided space, and real-time positioning and tracking within that space enables operations that are minimally invasive, accurate, and time-saving. Current surgical navigation systems, however, are virtual-reality-based systems in which the virtual and the real are kept separate: because the navigation space and the patient are spatially separated, the surgeon must repeatedly switch attention between the image-guided space and the real surgical site, which inevitably introduces positioning errors and causes great inconvenience during surgery.
One effective way to ease the difficulty of such operations is mixed reality, a new technology in which real-world and virtual-world information complement, superimpose on, and seamlessly integrate with each other; it can fuse the information of the image-guided space with the real information of the patient to form a navigation system that combines the virtual and the real. On the display, the real world and the virtual graphics are superimposed, so that in addition to the real endoscope image the surrounding tissue-structure information can also be observed.
Disclosure of Invention
In order to solve these problems, the invention provides a mixed-reality-based endoscopic surgery navigation robot system that alleviates the positioning errors caused by the difference between the image-guided space and the real surgical scene in existing endoscopic surgery navigation, helps the surgeon avoid misjudging the operative site and the surrounding structural tissues, and allows the operation to proceed more efficiently and safely.
In order to achieve the above object, the present invention provides an endoscopic surgery navigation robot system based on mixed reality, comprising a mixed reality surgical navigation module and an endoscopic surgery robot module;
The endoscope operation robot module is used for clamping an endoscope and adjusting the space position of the endoscope to acquire an endoscope image;
The mixed reality operation navigation module is used for volume rendering of image-guided space images and for obtaining virtual-real combined navigation images based on the endoscope images and the image-guided space images.
Optionally, the endoscopic surgical robot module includes a robot carrying base, a control cabinet, a mechanical arm, a movable jaw, and an endoscope;
the robot bearing base is used for placing the mechanical arm;
the control cabinet is used for controlling the movement of the mechanical arm;
the mechanical arm is used for changing the spatial position and posture of the surgical instrument and is connected with the movable clamping jaw;
The movable clamping jaw is used for fixing the endoscope and transferring the endoscope to a designated position;
The endoscope comprises a surgical channel and an endoscope camera;
The endoscope camera is used for collecting an operation position image and obtaining an endoscope image.
Optionally, the mixed reality surgical navigation module comprises an optical tracker, a rigid support, an optical positioner, a navigator host and a navigation image display;
the optical tracker is used for reflecting infrared rays;
the rigid support is used for placing the optical tracker on the patient body and the endoscope;
The optical positioning instrument is used for emitting infrared rays, receiving the infrared rays reflected by the optical tracker and acquiring real-time positions of the patient body and the endoscope;
the navigator host is used for volume-rendering the image-guided space image onto the endoscope imaging plane according to the real-time positions of the patient's body and the endoscope, and for obtaining a virtual-real combined navigation image based on the endoscope image and the image-guided space image;
The navigation image display is used for displaying the integrated virtual-real combined navigation image.
Optionally, the method for volume rendering the image-guided spatial image includes:
Performing non-rigid registration between the intraoperative ultrasound image and the preoperative MR/CT image of the patient's body to obtain an original image-guided space of the patient's body, the original image-guided space containing the real-time position of the patient's body;
correcting the real-time position of the endoscope;
Based on the corrected position of the endoscope and the real-time position of the patient body, obtaining the relative position relation between the endoscope and the original image guiding space image;
And performing volume rendering on the original image guiding space on the endoscope imaging plane based on the relative position relation to acquire the image guiding space image.
Optionally, the intraoperative ultrasound image is non-rigidly registered with the preoperative MR/CT image using a cross-modality image registration method based on a graph convolutional neural network.
Optionally, the method for correcting the real-time position of the endoscope comprises the following steps:
An epipolar geometry method is used: epipoles are computed from the endoscopic image and the virtual endoscopic image captured from two viewpoints, and the rotation error between the endoscopic image and the virtual endoscope is compensated by comparing, for each image, the angular relation between its epipole and its up direction, thereby obtaining the corrected position of the endoscope.
Optionally, the relative positional relationship between the endoscope and the original image-guided space is expressed as:
T_pt = T_wt · T_pw, with T_pw = T_{mp→w} · T_{p→mp} and T_wt = T_{mt→t} · T_{w→mt};
wherein T_pw is the transformation from the original image-guided space (p) to the navigation-system space (w), T_wt is the transformation from the navigation-system space to the endoscopic-surgery space (t), T_{p→mp} is the fixed relative transformation between the patient's body and the optical tracker mp mounted on it, T_{mt→t} is the fixed relative transformation between the optical tracker mt mounted on the endoscope and the endoscope, T_{mp→w} is the spatial transformation between the tracker on the patient's body and the navigation system, T_{w→mt} is the spatial transformation between the navigation-system space and the tracker on the endoscope, and T_pt is the relative positional relationship between the endoscope and the original image-guided space.
Optionally, the method for obtaining the virtual-real combined navigation image includes:
The endoscope image and the image-guided space image generated by volume rendering are linearly superimposed on the endoscope imaging plane according to a weighting formula,
The weighting formula is:
g(x) = (1 - α)·f0(x) + α·f1(x)
where f0(x) is the mapping function of the volume-rendering result, f1(x) is the mapping function of the endoscope image, and α is the weight controlling the proportion of the volume-rendered image to the endoscope image.
Compared with the prior art, the invention has the following advantages and technical effects:
The invention provides a mixed-reality-based endoscopic surgery navigation robot system that combines an endoscopic surgery robot with a surgical navigation system and fuses surgical navigation information with the endoscopic surgery picture to achieve a mixed reality effect. During endoscopic surgery this helps the surgeon better grasp the surrounding blood vessels, organs, and other tissue-structure information as well as the lesion-tissue information while still focusing on the real surgical picture, so the operator has a better field of view and better discrimination, making the surgical process safer and more efficient.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application. In the drawings:
fig. 1 is a schematic structural diagram of an endoscopic surgery navigation robot system based on mixed reality according to an embodiment of the present invention;
Fig. 2 is a schematic workflow diagram of an endoscopic surgery navigation robot system based on mixed reality according to an embodiment of the present invention.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
Examples
As shown in fig. 1, the present embodiment provides an endoscopic surgery navigation robot system based on mixed reality, which includes a mixed reality surgery navigation module and an endoscopic surgery robot module;
The endoscopic surgery robot module comprises a robot bearing base, a control cabinet, a mechanical arm, a movable clamping jaw, and an endoscope. The robot bearing base carries the mechanical arm; the control cabinet controls the movement of the mechanical arm; the mechanical arm changes the spatial position and posture of the surgical instrument and is connected with the movable clamping jaw; the movable clamping jaw fixes the endoscope and transfers it to a designated position; the endoscope comprises a surgical channel and an endoscope camera; and the endoscope camera collects images of the surgical site to obtain the endoscope image.
In this embodiment, the robot bearing base carries the whole robot, and the control cabinet receives control instructions, performs the kinematic calculation for each instruction, and controls the rotation angles of the mechanical-arm joints so that the end of the arm reaches the corresponding position. The mechanical arm changes the spatial position and posture of the surgical instrument; the clamping jaw clamps the endoscope and delivers it to the target position. The endoscope comprises a surgical channel and an endoscope camera inside the channel, and the endoscope camera acquires images of the surgical site.
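The sketch below is illustrative only and is not the patent's controller: it shows the kind of kinematic calculation such a control cabinet performs when mapping a commanded tip position to joint angles, using a closed-form inverse-kinematics solution for a hypothetical planar two-link arm. The link lengths, the planar simplification, and the function name are assumptions made for the example.

```python
import numpy as np

def two_link_ik(x, y, l1=0.4, l2=0.3):
    """Closed-form inverse kinematics of a planar two-link arm:
    given a target tip position (x, y), return the joint angles (q1, q2) in radians."""
    d = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(d) > 1.0:
        raise ValueError("target lies outside the reachable workspace")
    q2 = np.arccos(d)                                                          # elbow joint
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))  # shoulder joint
    return q1, q2

q1, q2 = two_link_ik(0.5, 0.2)  # joint angles so the arm tip reaches (0.5 m, 0.2 m)
```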
The mixed reality surgical navigation module comprises an optical tracker, a rigid support, an optical locator, a navigator host, and a navigation image display. The optical tracker reflects infrared light; the rigid support fixes the optical tracker to the patient's body and to the endoscope; the optical locator emits infrared light and receives the light reflected by the optical tracker to acquire the real-time positions of the patient's body and the endoscope; the navigator host volume-renders the image-guided space image onto the endoscope imaging plane according to those real-time positions and, based on the endoscope image and the image-guided space image, obtains the virtual-real combined navigation image; the navigation image display shows the fused virtual-real combined navigation image.
In this embodiment, the optical trackers in the mixed reality module are reflective balls that reflect the infrared light emitted by the optical locator. One support is placed at a fixed position on the patient's body to serve as the reference for the image-guided space; the other is mounted on the endoscope to locate the endoscope's corresponding position in the navigation space. The optical locator receives the infrared light reflected by the trackers and localizes them in three-dimensional space; each tracker is mounted on a rigid support, and the rigid supports are fixedly assembled on the surgical instrument and on the patient's body. Rigid-body transformations between the different coordinate systems give rough spatial positions of the surgical instrument and the patient, after which those positions are corrected and registered. The navigator host provides the computing resources for real-time spatial localization, navigation-image registration, and the fusion of the endoscope image with the image-guided space image. The navigation image display shows the fusion result of the endoscope image and the image-guided space image.
Further, in the navigator host, the method for volume rendering the image-guided spatial image includes: non-rigid registration of the intra-operative ultrasound image and the pre-operative MR/CT image of the patient's body, obtaining an original image-guided space of the patient's body, the original image-guided space containing a real-time location of the patient's body; correcting the real-time position of the endoscope; based on the corrected position of the endoscope and the real-time position of the patient body, obtaining the relative position relation between the endoscope and the original image guiding space image; and performing volume rendering on the original image guiding space on the endoscope imaging plane based on the relative position relation to acquire an image guiding space image.
Wherein, a cross-mode image registration method based on a graph-convolution neural network is used for non-rigid registration of an intraoperative ultrasonic image and a preoperative MR/CT image.
The method for correcting the real-time position of the endoscope is as follows: an epipolar geometry method is used to compute epipoles from the endoscopic image and the virtual endoscopic image captured from two viewpoints, and the rotation error between the endoscopic image and the virtual endoscope is compensated by comparing, for each image, the angular relation between its epipole and its up direction, yielding the corrected position of the endoscope.
The relative positional relationship between the endoscope and the original image-guided space is expressed as:
T_pt = T_wt · T_pw, with T_pw = T_{mp→w} · T_{p→mp} and T_wt = T_{mt→t} · T_{w→mt};
wherein T_pw is the transformation from the original image-guided space (p) to the navigation-system space (w), T_wt is the transformation from the navigation-system space to the endoscopic-surgery space (t), T_{p→mp} is the fixed relative transformation between the patient's body and the optical tracker mp mounted on it, T_{mt→t} is the fixed relative transformation between the optical tracker mt mounted on the endoscope and the endoscope, T_{mp→w} is the spatial transformation between the tracker on the patient's body and the navigation system, T_{w→mt} is the spatial transformation between the navigation-system space and the tracker on the endoscope, and T_pt is the relative positional relationship between the endoscope and the original image-guided space.
Further, the method for obtaining the virtual-real combined navigation image comprises: linearly superimposing the endoscope image and the image-guided space image generated by volume rendering on the endoscope imaging plane according to a weighting formula;
The weighting formula is:
g(x) = (1 - α)·f0(x) + α·f1(x)
where f0(x) is the mapping function of the volume-rendering result, f1(x) is the mapping function of the endoscope image, and α is the weight controlling the proportion of the volume-rendered image to the endoscope image.
As shown in fig. 2, a workflow of an endoscopic surgery navigation robot system based on mixed reality is provided in the present embodiment;
The optical trackers and rigid supports are installed as required, the navigation system is started, and the optical locator begins tracking the optical trackers on the endoscope and on the patient's body in real time to acquire their real-time positions. Each rigid support has a fixed positional relationship with, respectively, the patient's body and the endoscope body: the tracker mounted on the endoscope is approximately rigidly connected to it, so its relative pose T_{mt→t} can be regarded as fixed, and the tracker on the patient's body is calibrated during preoperative imaging, so its relative pose T_{p→mp} can likewise be regarded as approximately fixed. The pose transformation matrices of both can be obtained by rigid transformation. Feature points in the original image-guided space are then transformed into the endoscope space: from the transformation T_pw of the original image-guided space into the navigation-system space in which the optical locator sits, and the transformation T_wt of that navigation-system space into the endoscopic-surgery space, the transformation T_pt = T_wt · T_pw from the original image-guided space to the endoscopic-surgery space is obtained and used for the subsequent volume rendering of the image-guided space image into the endoscope space. A sketch of this transformation chain follows.
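The chaining of these rigid transformations can be sketched as follows. The 4x4 matrices are placeholders standing in for values that would come from preoperative calibration and from the optical locator at run time; the helper rigid_transform and the example numbers are assumptions introduced for illustration.

```python
import numpy as np

def rigid_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation matrix and a translation vector."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Placeholder transforms following the notation used above:
# T_p_mp : image-guided space -> tracker on the patient body (fixed, from preoperative calibration)
# T_mp_w : patient tracker    -> navigation-system space     (reported by the optical locator)
# T_w_mt : navigation space   -> tracker on the endoscope    (reported by the optical locator)
# T_mt_t : endoscope tracker  -> endoscope space             (fixed, rigid mounting)
T_p_mp = np.eye(4)
T_mp_w = np.eye(4)
T_w_mt = np.eye(4)
T_mt_t = rigid_transform(np.eye(3), np.array([0.0, 0.0, 0.05]))  # e.g. a 5 cm tracker-to-tip offset

T_pw = T_mp_w @ T_p_mp   # image-guided space -> navigation-system space
T_wt = T_mt_t @ T_w_mt   # navigation-system space -> endoscopic-surgery space
T_pt = T_wt @ T_pw       # image-guided space -> endoscopic-surgery space

point_img = np.array([10.0, 20.0, 30.0, 1.0])  # a feature point in the image-guided space (homogeneous)
point_endo = T_pt @ point_img                   # the same point expressed in the endoscope space
```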
In order to ensure the accuracy of the original image-guided space, the invention performs non-rigid registration between the intraoperative ultrasound imaging result and the preoperative MR/CT image to obtain the original image-guided space. A cross-modality image registration method based on a graph convolutional neural network (GCN) is used to register the intraoperative ultrasound image with the preoperative MR/CT image of the image-guided space. The ultrasound result and the preoperative MR/CT image are sampled pixel-wise according to morphological features and converted into graph nodes, each with its own attributes, using a k-nearest-neighbours (k-NN) method; edges between the nodes are produced by a generator based on a Transformer structure; after the graph is generated, a graph convolutional network extracts features and predicts the correspondences, the edge generator and the graph convolutional network sharing weights. Registration optimization reduces the error between the image-guided space and the real surgical scene.
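A minimal sketch of the graph-construction step described above, assuming feature points and node attributes have already been sampled from the two modalities. The single propagation rule used here is a generic graph-convolution layer with random weights, not the patent's trained Transformer edge generator or correspondence network.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_graph(points, k=8):
    """Build a symmetric k-nearest-neighbour adjacency matrix from sampled feature points."""
    tree = cKDTree(points)
    n = len(points)
    adj = np.zeros((n, n))
    _, idx = tree.query(points, k=k + 1)     # k+1 because each point's nearest neighbour is itself
    for i, neighbours in enumerate(idx):
        for j in neighbours[1:]:
            adj[i, j] = adj[j, i] = 1.0
    return adj

def gcn_layer(node_feats, adj, weight):
    """One generic graph-convolution step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])                       # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))   # normalised degree matrix
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ node_feats @ weight, 0.0)

# Hypothetical sampled points (x, y, z) and per-node attributes from the two modalities.
us_points = np.random.rand(100, 3)    # intraoperative ultrasound samples
mr_points = np.random.rand(120, 3)    # preoperative MR/CT samples
us_feats = np.random.rand(100, 16)
weights = np.random.rand(16, 16)

us_adj = knn_graph(us_points)
mr_adj = knn_graph(mr_points)                    # the MR/CT side is processed the same way
us_embed = gcn_layer(us_feats, us_adj, weights)  # node features used for correspondence prediction
```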
The optical tracker on the endoscope can be rotated by the surgical instrument, so the reported endoscope pose may differ considerably from the true one. To capture the spatial pose of the endoscope accurately and reduce the volume-rendering error caused by endoscope rotation, the rotation error between the endoscope image and the virtual endoscope must be compensated. An epipolar geometry method is adopted: epipoles are computed from the endoscopic image and the virtual endoscopic image captured from the two viewpoints C and C'.
Let the projection matrices of the two cameras be P and P', respectively; the projections of the same spatial point X onto the images I and I' are x = PX and x' = P'X.
The two camera centres C and C' project onto the opposite image planes as the epipoles e and e', respectively; the lines through the epipoles and the image points of X are the epipolar lines l and l', and the relationship between an image point and the corresponding epipolar line is l' = Fx, where F is called the fundamental matrix.
Consider in image I the straight line m through the reference point o (the intersection of the visual axis with the image plane) and the epipole e. When the visual axis and viewpoint of the virtual endoscope agree with those of the real endoscope, the angle between the image's up direction and the line m is the same in both images; when a rotation error exists, there is a gap between the corresponding angle β of the virtual endoscope and the angle α of the real endoscope.
For the endoscope camera, feature points are extracted from the image with the KLT method, and the epipole e is then computed from the fundamental matrix. The robust RANSAC (RANdom SAmple Consensus) algorithm is used to estimate the fundamental matrices F and F' of the two views. The epipole e is the null vector of the fundamental matrix F, i.e. Fe = 0, and is obtained by singular value decomposition (SVD). The angle α of the vector from the reference point o to the epipole e with respect to the up direction is then computed, followed by the angle β of the vector from the reference point o' to the epipole e' of the corresponding virtual endoscopic image with respect to the up direction.
The rotation error between the up directions of the virtual and real endoscopes is obtained as θ = β - α and is used to correct the endoscope position and orientation information.
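A minimal sketch of this epipole-based rotation compensation, assuming point correspondences have already been tracked in each pair of views (for example with cv2.goodFeaturesToTrack followed by cv2.calcOpticalFlowPyrLK for KLT tracking). The function names, the chosen pixel-coordinate "up" convention, and the reference points o and o' are assumptions of the example.

```python
import cv2
import numpy as np

def epipole(F):
    """Epipole e satisfying F e = 0, taken as the right null vector of F via SVD."""
    _, _, vt = np.linalg.svd(F)
    e = vt[-1]
    return e[:2] / e[2]                 # back to inhomogeneous image coordinates

def up_angle(o, e):
    """Angle of the vector from reference point o (visual axis / image-plane intersection)
    to epipole e, measured against the image 'up' direction (-y in pixel coordinates)."""
    v = e - o
    return np.arctan2(v[0], -v[1])

def rotation_error(real_pts1, real_pts2, virt_pts1, virt_pts2, o_real, o_virt):
    """theta = beta - alpha, the rotation error between virtual and real endoscope up directions."""
    F_real, _ = cv2.findFundamentalMat(real_pts1, real_pts2, cv2.FM_RANSAC)  # robust estimate
    F_virt, _ = cv2.findFundamentalMat(virt_pts1, virt_pts2, cv2.FM_RANSAC)
    alpha = up_angle(o_real, epipole(F_real))   # real endoscope
    beta = up_angle(o_virt, epipole(F_virt))    # virtual endoscope
    return beta - alpha
```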
According to the principles of projective geometry, the image in the original image-guided space is volume-rendered onto the endoscope imaging plane; the output image is computed by accumulating the opacities and grey values of the sample points along each viewing-ray direction. With C_i the grey value accumulated up to sample point i, β_i the transparency accumulated up to sample point i, and c_i and α_i the grey value and opacity of sample i, the following formulas are used:
C_i = C_{i-1} + β_{i-1}·α_i·c_i
β_i = β_{i-1}·(1 - α_i)
This yields the volume-rendering result, on the imaging plane of the endoscope camera, of the image in the image-guided space.
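The front-to-back accumulation above can be sketched per viewing ray as follows; the early-termination threshold is an assumption added for efficiency and is not part of the stated formulas.

```python
def composite_ray(greys, opacities):
    """Front-to-back compositing along one viewing ray.
    greys[i] (c_i) and opacities[i] (alpha_i) are the samples along the ray;
    C accumulates the grey value, beta the remaining transparency."""
    C, beta = 0.0, 1.0
    for c_i, a_i in zip(greys, opacities):
        C += beta * a_i * c_i        # C_i = C_{i-1} + beta_{i-1} * alpha_i * c_i
        beta *= (1.0 - a_i)          # beta_i = beta_{i-1} * (1 - alpha_i)
        if beta < 1e-4:              # early ray termination (assumed optimisation)
            break
    return C

pixel_value = composite_ray([0.8, 0.6, 0.9], [0.2, 0.5, 0.7])
```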
The actual image captured by the endoscope camera, i.e. the endoscope image, is linearly superimposed on the endoscope camera's imaging plane with the image-guided space image produced by volume rendering, according to the weighting formula g(x) = (1 - α)·f0(x) + α·f1(x), where f0(x) is the mapping function of the volume-rendering result, f1(x) is the mapping function of the real endoscope image, and α controls the proportion of the volume-rendered image to the real endoscope image. To prevent specular reflections from illuminated tissue in the endoscope picture, whose excessive brightness would reduce the contrast contributed by the volume rendering, α is adjusted dynamically according to the luminance component L of the CIE-standard Lab colour space, ensuring the clarity and contrast of the field of view.
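A minimal blending sketch under these assumptions: the two frames are same-sized 8-bit BGR images, and the specific rule for lowering α as the mean Lab luminance rises is illustrative only, since the text states merely that α is adjusted dynamically from the L component.

```python
import cv2
import numpy as np

def fuse(volume_bgr, endoscope_bgr, alpha_base=0.5, k=0.4):
    """g = (1 - alpha) * volume rendering + alpha * endoscope image, with alpha reduced
    when the endoscope frame is very bright (e.g. specular reflections) so that the
    volume-rendered overlay keeps its contrast."""
    lab = cv2.cvtColor(endoscope_bgr, cv2.COLOR_BGR2LAB)
    l_mean = lab[..., 0].mean() / 255.0                                 # normalised mean luminance
    alpha = float(np.clip(alpha_base - k * max(l_mean - 0.5, 0.0), 0.0, 1.0))
    fused = cv2.addWeighted(volume_bgr, 1.0 - alpha, endoscope_bgr, alpha, 0.0)
    return fused, alpha
```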
Finally, the fused picture lets the surgeon simultaneously observe the real-time view of the operative area captured by the endoscope camera and, in the image-guided space, the tissue structures surrounding the operative area and the location of the diseased tissue, so that easily damaged tissues can be avoided and the lesion target area can be located and operated on accurately.
The present application is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present application are intended to be included in the scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.

Claims (5)

1. An endoscope operation navigation robot system based on mixed reality is characterized by comprising a mixed reality operation navigation module and an endoscope operation robot module;
The endoscope operation robot module is used for clamping an endoscope and adjusting the space position of the endoscope to acquire an endoscope image;
The mixed reality operation navigation module is used for volume rendering of image guiding space images, and for obtaining virtual-real combined navigation images based on the endoscope images and the image guiding space images;
the method for volume rendering the image-guided spatial image comprises the following steps:
Non-rigid registration of an intraoperative ultrasound image and a preoperative MR/CT image of a patient's body is performed, and an original image guiding space of the patient's body is acquired, the original image guiding space containing a real-time position of the patient's body;
Correcting the real-time position of the endoscope: computing epipoles, by an epipolar geometry method, from the endoscopic images and the virtual endoscopic images captured from two viewpoints, and compensating the rotation errors between the endoscopic images and the virtual endoscope by comparing the angular relations between the epipoles and the corresponding up directions of the endoscopic images and the virtual endoscopic images, to obtain the corrected position of the endoscope;
Based on the corrected position of the endoscope and the real-time position of the patient body, obtaining the relative position relation between the endoscope and the original image guiding space image;
the relative positional relationship between the endoscope and the original image guiding space is expressed as T_pt = T_wt · T_pw, with T_pw = T_{mp→w} · T_{p→mp} and T_wt = T_{mt→t} · T_{w→mt};
wherein T_pw is the transformation from the original image guiding space (p) to the navigation-system space (w), T_wt is the transformation from the navigation-system space to the endoscopic-surgery space (t), T_{p→mp} is the fixed relative transformation between the patient's body and the optical tracker mp mounted on it, T_{mt→t} is the fixed relative transformation between the optical tracker mt mounted on the endoscope and the endoscope, T_{mp→w} is the spatial transformation between the tracker on the patient's body and the navigation system, T_{w→mt} is the spatial transformation between the navigation-system space and the tracker on the endoscope, and T_pt is the relative positional relationship between the endoscope and the original image guiding space;
And performing volume rendering on the original image guiding space on the endoscope imaging plane based on the relative position relation to acquire the image guiding space image.
2. The endoscope operation navigation robot system based on mixed reality of claim 1, wherein the endoscope operation robot module comprises a robot bearing base, a control cabinet, a mechanical arm, a movable clamping jaw, and an endoscope;
the robot bearing base is used for placing the mechanical arm;
the control cabinet is used for controlling the movement of the mechanical arm;
the mechanical arm is used for changing the spatial position and posture of the surgical instrument and is connected with the movable clamping jaw;
The movable clamping jaw is used for fixing the endoscope and transferring the endoscope to a designated position;
the endoscope includes an endoscope camera;
The endoscope camera is used for collecting an operation position image and obtaining an endoscope image.
3. The endoscope operation navigation robot system based on mixed reality of claim 2, wherein,
The mixed reality operation navigation module comprises an optical tracker, a rigid support, an optical positioning instrument, a navigator host and a navigation image display;
the optical tracker is used for reflecting infrared rays;
the rigid support is used for placing the optical tracker on the patient body and the endoscope;
The optical positioning instrument is used for emitting infrared rays, receiving the infrared rays reflected by the optical tracker and acquiring real-time positions of the patient body and the endoscope;
the navigator host is used for volume-rendering the image guiding space image onto an endoscope imaging plane according to the real-time positions of the patient body and the endoscope, and for obtaining a virtual-real combined navigation image based on the endoscope image and the image guiding space image;
The navigation image display is used for displaying the integrated virtual-real combined navigation image.
4. The endoscope operation navigation robot system based on mixed reality of claim 1, wherein,
non-rigid registration of the intraoperative ultrasound image with the preoperative MR/CT image is performed using a cross-modality image registration method based on a graph convolutional neural network.
5. The endoscope operation navigation robot system based on mixed reality of claim 3, wherein,
The method for obtaining the navigation image combining the virtual and the real comprises the following steps:
the endoscope image and the image guiding space image generated by volume rendering are linearly superimposed on an endoscope imaging plane according to a weighting formula,
the weighting formula g(x) being:
g(x) = (1 - α)·f0(x) + α·f1(x)
where f0(x) is the mapping function of the volume-rendering result, f1(x) is the mapping function of the endoscope image, and α is the weight controlling the proportion of the volume-rendered image to the endoscope image.
CN202111641934.XA 2021-12-29 2021-12-29 Endoscope operation navigation robot system based on mixed reality Active CN114191078B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111641934.XA CN114191078B (en) 2021-12-29 2021-12-29 Endoscope operation navigation robot system based on mixed reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111641934.XA CN114191078B (en) 2021-12-29 2021-12-29 Endoscope operation navigation robot system based on mixed reality

Publications (2)

Publication Number Publication Date
CN114191078A CN114191078A (en) 2022-03-18
CN114191078B true CN114191078B (en) 2024-04-26

Family

ID=80657251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111641934.XA Active CN114191078B (en) 2021-12-29 2021-12-29 Endoscope operation navigation robot system based on mixed reality

Country Status (1)

Country Link
CN (1) CN114191078B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115281584B (en) * 2022-06-30 2023-08-15 中国科学院自动化研究所 Flexible endoscope robot control system and flexible endoscope robot simulation method

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000279425A (en) * 1999-03-30 2000-10-10 Olympus Optical Co Ltd Navigation device
WO2008002830A2 (en) * 2006-06-29 2008-01-03 Intuitive Surgical, Inc. Surgical tool position and identification indicator displayed in a boundary area of a computer display screen
WO2012033530A2 (en) * 2010-09-08 2012-03-15 University Of Houston Devices, systems and methods for multimodal biosensing and imaging
WO2012095755A1 (en) * 2011-01-13 2012-07-19 Koninklijke Philips Electronics N.V. Intraoperative camera calibration for endoscopic surgery
WO2014140813A1 (en) * 2013-03-11 2014-09-18 Fondation De Cooperation Scientifique Anatomical site relocalisation using dual data synchronisation
CN107456278A (en) * 2016-06-06 2017-12-12 北京理工大学 A kind of ESS air navigation aid and system
WO2018032083A1 (en) * 2016-08-17 2018-02-22 Synaptive Medical (Barbados) Inc. Methods and systems for registration of virtual space with real space in an augmented reality system
CN109512514A (en) * 2018-12-07 2019-03-26 陈玩君 A kind of mixed reality orthopaedics minimally invasive operation navigating system and application method
CN110368089A (en) * 2019-08-07 2019-10-25 湖南省华芯医疗器械有限公司 A kind of bronchial endoscope three-dimensional navigation method
WO2019213432A1 (en) * 2018-05-03 2019-11-07 Intuitive Surgical Operations, Inc. Systems and methods for measuring a distance using a stereoscopic endoscope
WO2021000664A1 (en) * 2019-07-03 2021-01-07 中国科学院自动化研究所 Method, system, and device for automatic calibration of differences in cross-modal target detection
CN112641514A (en) * 2020-12-17 2021-04-13 罗雄彪 Minimally invasive interventional navigation system and method
CN113222954A (en) * 2021-05-21 2021-08-06 大连海事大学 Multi-exposure image ghost-free fusion method based on patch alignment under global gradient
CN113520603A (en) * 2021-08-26 2021-10-22 复旦大学 Minimally invasive surgery robot system based on endoscope

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9289267B2 (en) * 2005-06-14 2016-03-22 Siemens Medical Solutions Usa, Inc. Method and apparatus for minimally invasive surgery using endoscopes
US7824328B2 (en) * 2006-09-18 2010-11-02 Stryker Corporation Method and apparatus for tracking a surgical instrument during surgery
US20170119474A1 (en) * 2015-10-28 2017-05-04 Endochoice, Inc. Device and Method for Tracking the Position of an Endoscope within a Patient's Body

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000279425A (en) * 1999-03-30 2000-10-10 Olympus Optical Co Ltd Navigation device
WO2008002830A2 (en) * 2006-06-29 2008-01-03 Intuitive Surgical, Inc. Surgical tool position and identification indicator displayed in a boundary area of a computer display screen
WO2012033530A2 (en) * 2010-09-08 2012-03-15 University Of Houston Devices, systems and methods for multimodal biosensing and imaging
WO2012095755A1 (en) * 2011-01-13 2012-07-19 Koninklijke Philips Electronics N.V. Intraoperative camera calibration for endoscopic surgery
WO2014140813A1 (en) * 2013-03-11 2014-09-18 Fondation De Cooperation Scientifique Anatomical site relocalisation using dual data synchronisation
WO2017211087A1 (en) * 2016-06-06 2017-12-14 北京理工大学 Endoscopic surgery navigation method and system
CN107456278A (en) * 2016-06-06 2017-12-12 北京理工大学 A kind of ESS air navigation aid and system
WO2018032083A1 (en) * 2016-08-17 2018-02-22 Synaptive Medical (Barbados) Inc. Methods and systems for registration of virtual space with real space in an augmented reality system
WO2019213432A1 (en) * 2018-05-03 2019-11-07 Intuitive Surgical Operations, Inc. Systems and methods for measuring a distance using a stereoscopic endoscope
CN109512514A (en) * 2018-12-07 2019-03-26 陈玩君 A kind of mixed reality orthopaedics minimally invasive operation navigating system and application method
WO2021000664A1 (en) * 2019-07-03 2021-01-07 中国科学院自动化研究所 Method, system, and device for automatic calibration of differences in cross-modal target detection
CN110368089A (en) * 2019-08-07 2019-10-25 湖南省华芯医疗器械有限公司 A kind of bronchial endoscope three-dimensional navigation method
CN112641514A (en) * 2020-12-17 2021-04-13 罗雄彪 Minimally invasive interventional navigation system and method
CN113222954A (en) * 2021-05-21 2021-08-06 大连海事大学 Multi-exposure image ghost-free fusion method based on patch alignment under global gradient
CN113520603A (en) * 2021-08-26 2021-10-22 复旦大学 Minimally invasive surgery robot system based on endoscope

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Neurosurgical navigation system and clinical application (神经外科手术导航系统及临床应用); 张翠华; China Medical Device Information (中国医疗器械信息); 2007-01-20 (01); full text *
Nursing care for transureteroscopic holmium laser treatment of complex ureteral calculi (经输尿管镜钬激光治疗复杂输尿管结石的护理); 覃凌峰; 张翠华; Journal of Qiqihar Medical University (齐齐哈尔医学院学报); 2007-03-28 (06); full text *

Also Published As

Publication number Publication date
CN114191078A (en) 2022-03-18

Similar Documents

Publication Publication Date Title
JP7429120B2 (en) Non-vascular percutaneous procedure system and method for holographic image guidance
Gavaghan et al. A portable image overlay projection device for computer-aided open liver surgery
Chen et al. Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display
Shahidi et al. Implementation, calibration and accuracy testing of an image-enhanced endoscopy system
EP2433262B1 (en) Marker-free tracking registration and calibration for em-tracked endoscopic system
US9248000B2 (en) System for and method of visualizing an interior of body
US8073528B2 (en) Tool tracking systems, methods and computer products for image guided surgery
US8147503B2 (en) Methods of locating and tracking robotic instruments in robotic surgical systems
Mourgues et al. Flexible calibration of actuated stereoscopic endoscope for overlay in robot assisted surgery
US20090088897A1 (en) Methods and systems for robotic instrument tool tracking
WO2013141155A1 (en) Image completion system for in-image cutoff region, image processing device, and program therefor
Ma et al. Moving-tolerant augmented reality surgical navigation system using autostereoscopic three-dimensional image overlay
Ferguson et al. Toward image-guided partial nephrectomy with the da Vinci robot: exploring surface acquisition methods for intraoperative re-registration
GB2532004A (en) Hybrid navigation system for surgical interventions
WO2022150767A1 (en) Registration degradation correction for surgical navigation procedures
CN114191078B (en) Endoscope operation navigation robot system based on mixed reality
Ma et al. Knee arthroscopic navigation using virtual-vision rendering and self-positioning technology
CN111658142A (en) MR-based focus holographic navigation method and system
Fan et al. Three-dimensional image-guided techniques for minimally invasive surgery
Zhang et al. 3D augmented reality based orthopaedic interventions
WO2009027088A1 (en) Augmented visualization in two-dimensional images
Wengert et al. Endoscopic navigation for minimally invasive suturing
CN115089293A (en) Calibration method for spinal endoscopic surgical robot
Chen et al. Video-guided calibration of an augmented reality mobile C-arm
CN113470184A (en) Endoscope augmented reality error compensation method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant