CN106236264B - Gastrointestinal surgery navigation method and system based on optical tracking and image matching


Info

Publication number
CN106236264B
CN106236264B
Authority
CN
China
Prior art keywords
image
data
medical instrument
sub
lens
Prior art date
Legal status
Active
Application number
CN201610717521.8A
Other languages
Chinese (zh)
Other versions
CN106236264A (en)
Inventor
李国新 (Li Guoxin)
陈韬 (Chen Tao)
蒋振刚 (Jiang Zhengang)
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201610717521.8A priority Critical patent/CN106236264B/en
Publication of CN106236264A publication Critical patent/CN106236264A/en
Application granted granted Critical
Publication of CN106236264B publication Critical patent/CN106236264B/en

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 17/00: Surgical instruments, devices or methods, e.g. tourniquets
    • A61B 2017/00743: Type of operation; Specification of treatment sites
    • A61B 2017/00818: Treatment of the gastro-intestinal system

Landscapes

  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a gastrointestinal surgery navigation method and system based on optical tracking and image matching. The navigation method comprises the following steps: acquiring scanned image data of a patient covering at least the surgical site; providing the medical instrument with a lens and, when the instrument enters the surgical site, obtaining a real-time optical image at the front end of the lens through the lens; tracking the position data of the medical instrument lens with a tracking element and matching the position data with the scanned image data to obtain a virtual image corresponding to the optical image; and outputting the real-time optical image at the front end of the lens together with the virtual image for real-time surgical navigation. The invention realizes dynamic navigation during the operation and provides real-time tracking of medical instruments in gastrointestinal surgery.

Description

Gastrointestinal surgery navigation method and system based on optical tracking and image matching
Technical Field
The invention relates to a gastrointestinal surgery navigation method and system based on optical tracking and image matching.
Background
As the technology and equipment have matured, the laparoscope has come into wide use in many fields, gastrointestinal surgery in particular. However, the operating characteristics of the laparoscope deprive the surgeon of the delicate "touch" of traditional open surgery, making the "vision" needed to distinguish anatomical sites under the scope extremely important. Most laparoscopes offer only a 2D field of view and lack depth perception; 3D lenses are available on the market, but their high cost has kept them from being widely adopted in China. Laparoscopy also has inherent limitations: the operator's field of view narrows from about 160 degrees in open surgery to about 70 degrees, and this tubular view prevents the operator from effectively observing multiple abdominal organs and instruments at the same time, greatly reducing global awareness. Gastrointestinal operations, gastric cancer operations in particular, are guided by the peripheral blood vessels, so the course and variation of these vessels are of great significance to the surgical strategy.
In recent years, computer-aided technologies, three-dimensional reconstruction in particular, have been increasingly applied to gastrointestinal surgery and offer some supplementary value for anatomical identification in laparoscopic procedures. At present, three-dimensional reconstruction is mainly applied, at home and abroad, to surgical planning and surgical navigation. Surgical planning usually means rehearsing the operation on a three-dimensional reconstructed model before surgery; surgical navigation means guiding the operation by observing the reconstructed anatomical model during surgery. The guidance information is generally presented on a computer screen or as a 3D-printed model, but both forms amount to 'static' navigation.
Disclosure of Invention
The invention aims to provide a gastrointestinal surgery navigation method and system based on optical tracking and image matching. The invention realizes dynamic navigation during the operation and provides a real-time tracking effect for gastrointestinal surgery.
The technical scheme is as follows:
a gastrointestinal surgery navigation method based on the combination of optical tracking and image matching,
the method comprises the following steps:
acquiring scanned image data of a patient at least at a surgical site;
the medical instrument is provided with a lens, and when the medical instrument enters the surgical site, a real-time optical image at the front end of the lens is obtained through the lens; the tracking element tracks the position data of the medical instrument lens and matches the position data with the scanned image data to obtain a virtual image corresponding to the optical image;
and outputting the real-time optical image and the virtual image at the front end of the lens for real-time navigation of the operation.
Further, the aforementioned scanned image data includes at least image data of the surgical site and positioning data of positioning base points on the human body;
the position data is then matched with the positioning data, and a virtual image corresponding to the positioning data is displayed.
Furthermore, the medical instrument is provided with a tracking mark point, and the tracking element acquires the position data of the lens of the medical instrument by tracking the tracking mark point.
Furthermore, the real-time optical image at the front end of the lens and the virtual image are fused to form a fused image, and the fused image is output.
Furthermore, the scanned image data comprises sub-scene data of at least two sub-scenes; when the medical instrument enters the operation area, the position data of the medical instrument is matched against the positioning data of each set of sub-scene data, and the virtual image of the corresponding sub-scene data is displayed.
Further, the sub-scenes comprise: the central region, the right lower region, the left lower region, the right upper region and the hepatogastric region;
the sub-scene data are: left lower region data, right lower region data, right upper region data, central region data and hepatogastric region data. In the preceding step, the position data of the medical instrument lens is matched against the positioning data in the left lower, right lower, right upper, central or hepatogastric region data, and a virtual image corresponding to the position data is displayed.
Furthermore, in the above steps, the medical instrument sequentially enters the left lower region, the right lower region, the right upper region, the central region and the hepatogastric region, and the corresponding virtual image is displayed.
Furthermore, the scanned image data includes sub-data, and the sub-data includes sub-virtual images. In the foregoing step, the feature points in the optical image are tracked; when the feature points of a partial region of the optical image turn over or move, the sub-virtual image of that region is called, superimposed onto the virtual image, and output.
Further, in the foregoing step, after a real-time optical image at the front end of the lens is acquired, feature points in the optical image are extracted, and erroneous feature points are identified and removed according to the correspondence between these feature points and the feature points in the virtual image; or the deviation value between the optical image and the virtual image is calculated, and the virtual image is corrected according to the deviation value.
A gastrointestinal surgery navigation system for medical instruments based on the combination of optical tracking and image matching,
the system comprises:
a medical instrument having a lens for entering the surgical field and acquiring an optical image of the surgical field;
a tracking element for tracking position data of a medical instrument lens;
a storage unit for storing scanned image data acquired in advance;
the matching unit is used for matching the position data of the lens with the scanned image data and obtaining a virtual image corresponding to the position data;
and the output unit is used for outputting at least part of the real-time optical image and the virtual image at the front end of the lens outwards.
The following illustrates the advantages or principles of the invention:
1. The navigation method acquires scanned image data of the patient, covering at least the surgical site, in advance. During the operation the medical instrument acquires the optical image at the front end of its lens, the tracking element tracks the position of the instrument, and the position data of the lens is matched with the scanned image data to obtain a virtual image corresponding to the optical image, realizing real-time dynamic navigation of the operation.
2. The scanned image data acquired in advance comprises the image data and the corresponding positioning data. The pre-scan can be completed by CT scanning or other means; positioning base points can be placed on the patient during scanning, or features of the patient's own body can serve as base points, so that the image data is accurately positioned and precisely matched with the optical image.
3. To make it easier for medical staff to observe the operation, the real-time optical image at the front end of the lens and the virtual image are fused to form a fused image, which is output.
4. During the operation, the abdominal viscera have a complex structure, are not fixed in position and deform easily. To improve navigation precision, the surgical scene is divided into several sub-scenes, which further improves matching precision and realizes high-precision navigation.
5. For gastrointestinal surgery, the following five sub-scenes are employed: the central region, the right lower region, the left lower region, the right upper region and the hepatogastric region. Distinguishing these five sub-scenes better meets the practical needs of the operation.
6. The medical instrument is tracked by the tracking element and the surgical scene is matched with the scanned image data, but the precision still needs improvement. At this point, feature points in the optical image at the front end of the lens can be extracted and put into correspondence with feature points in the virtual image, and a correction applied according to the deviation between them, achieving precise matching.
7. The feature points in the optical image are tracked; when the feature points of a partial region of the optical image turn over or move, the sub-virtual image of that region is called and superimposed onto the virtual image for output. The method thus adapts to positional changes of organs and blood vessels during the operation and realizes real-time, accurate navigation.
8. When the optical image is matched with the virtual image, erroneous feature points are removed or the deviation is corrected according to the deviation value, achieving an accurate fusion effect.
Drawings
FIG. 1 is a flow chart of a method of navigating a gastrointestinal procedure according to an embodiment of the present invention.
Detailed Description
The following provides a detailed description of embodiments of the invention.
As shown in fig. 1, the navigation method of laparoscopic gastrointestinal surgery based on optical tracking and image matching includes the following steps:
medical staff acquire scanned image data of the patient at the surgical site in advance by CT scanning (the scanned image data comprises at least image data of the surgical site and positioning data of base points on the human body); the scanned image data is stored in a storage unit, imported into a processor of the navigation system before the operation, and used to reconstruct a three-dimensional virtual model (an illustrative reconstruction sketch follows this list of steps);
during the operation, a medical instrument (in this embodiment a laparoscope, though it may also be a surgical blade head carrying a lens, or another instrument) enters the surgical site, and the lens acquires video to produce a real-time optical image of the front end of the medical instrument;
tracking the position of the medical instrument lens through a tracking element, and performing coarse matching based on the position;
then, realizing accurate matching through scene segmentation and matching of image features;
and fusing the optical image and the virtual model together through scene fusion and outputting the fused result.
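As an illustration of the pre-operative reconstruction step referred to above (this sketch is not part of the patent text), a surface model can be extracted from a CT volume with off-the-shelf tools; the DICOM directory path and the intensity threshold below are assumptions:

```python
# Minimal sketch of CT-to-mesh reconstruction. The directory path and the
# Hounsfield threshold are assumptions, not values from the patent.
import SimpleITK as sitk
from skimage import measure

reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("patient_ct/"))  # assumed path
volume = reader.Execute()

array = sitk.GetArrayFromImage(volume)   # voxel array ordered (z, y, x)
spacing = volume.GetSpacing()[::-1]      # reorder spacing to (z, y, x)

# Marching cubes at an assumed threshold extracts the organ/vessel surface.
verts, faces, normals, _ = measure.marching_cubes(array, level=150.0,
                                                  spacing=spacing)
print(f"reconstructed mesh: {len(verts)} vertices, {len(faces)} faces")
```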
Each of these steps is now described in detail:
the medical instrument is provided with a lens, when the medical instrument enters the operation position, the position data of the lens of the medical instrument tracked by the medical instrument tracking element is obtained through the lens, and the position data is matched with the positioning data in the scanned image data to obtain a virtual image corresponding to the optical image; and outputting the real-time optical image and the virtual image at the front end of the lens for real-time navigation of the operation.
The medical instrument is provided with tracking mark points, and the tracking element acquires the position data of the instrument lens by tracking these mark points. (The lens of the medical instrument must first be calibrated. A 12×9 checkerboard is used as the planar calibration template, with squares of side 20 mm; the output resolution of the lens is 1280×720. The procedure is: (1) shoot 10 images of the checkerboard from different angles; (2) detect all corner points of the checkerboard, and solve the internal parameters of the camera, such as the focal length and the coordinate center, from the relation between the spatial points of the checkerboard and the corresponding image points; (3) solve the distortion parameters of the camera; (4) recover an undistorted image using the finally solved radial distortion parameters.)
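The calibration just described matches the standard checkerboard procedure available in OpenCV, so it can be illustrated as below; this is an interpretive sketch rather than the patent's code, and the image paths are assumptions (a 12×9 board of 20 mm squares has 11×8 inner corners):

```python
# Hypothetical sketch: Zhang-style checkerboard calibration with OpenCV.
# A 12x9 board of 20 mm squares has 11x8 inner corners; paths are assumed.
import glob
import cv2
import numpy as np

pattern = (11, 8)   # inner corners of a 12x9 checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 20.0

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):   # assumed: the 10 captured views
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsics (focal length, principal point) and distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, (1280, 720), None, None)

# Recover an undistorted image from a live 1280x720 frame.
frame = cv2.imread("frame.png")         # assumed live frame
undistorted = cv2.undistort(frame, K, dist)
```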
The real-time optical image at the front end of the lens and the virtual image are then fused to form a real-time fused image, which is output.
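A simple form of such fusion is alpha blending of the rendered virtual view over the laparoscopic frame; in this sketch the file names and blending weights are assumptions:

```python
# Hypothetical sketch: alpha-blend the rendered virtual view onto the
# live laparoscopic frame. File names and weights are assumptions.
import cv2

frame = cv2.imread("laparoscope_frame.png")    # assumed live optical image
virtual = cv2.imread("virtual_render.png")     # assumed rendered model view
virtual = cv2.resize(virtual, (frame.shape[1], frame.shape[0]))

# Weighted blend: 70% optical image, 30% virtual anatomy overlay.
fused = cv2.addWeighted(frame, 0.7, virtual, 0.3, 0)
cv2.imwrite("fused.png", fused)
```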
When the scanned image data of the patient is obtained by CT scanning, it comprises sub-scene data for several different sub-scenes; when the medical instrument enters the operation area, the position data of the medical instrument is matched against the positioning data of each set of sub-scene data, and the virtual image of the corresponding sub-scene is obtained. The specific steps are as follows:
the scanning image data is divided into five sub-scenes, namely, between a left lower region (around a left blood vessel of a gastric omentum), a right lower region (under a pyloric portal), a right upper region (above a pyloric portal and a hepatoduodenal ligament), a central region (an abdominal artery and branches thereof) and a liver and stomach region, when a medical instrument enters an operation region, the position data of the medical instrument corresponds to the positioning data of each sub-scene data, and the corresponding virtual image of the sub-scene data is displayed. The method reduces the error of registration and ensures the accuracy of the operation.
The medical instrument enters the five sub-scenes in sequence: the left lower region, the right lower region, the right upper region, the central region and the hepatogastric region.
When the medical instrument enters a sub-scene for the first time, it pauses; the optical image at that moment is matched with the virtual image, the corresponding sub-scene is entered, and the virtual image of the corresponding sub-scene data is displayed.
During navigation, after the real-time optical image at the front end of the lens is acquired, feature points in the optical image are extracted; erroneous feature points are identified according to the correspondence between these feature points and the feature points in the virtual image, and removed. Alternatively, the deviation value between the optical image and the virtual image is calculated, and the virtual image is corrected according to the deviation value, achieving accurate positioning. The method is as follows: an image registration method based on binocular-vision image feature points is adopted to match the three-dimensional model into the binocular vision image of the medical instrument. First, a virtual instrument-view image at the corresponding position is obtained from the three-dimensional model and the position of the medical instrument, and feature points of the virtual image and the optical image are detected with the Harris feature extraction algorithm; mismatched points among the erroneous registration points are then filtered out according to feature cross-correlation and the invariance of tissue structure; finally, once the correctly registered feature points are obtained, the registered image is produced by TPS (thin-plate spline) transformation. The registration algorithm based on multi-scale Harris corners and SAM is a new algorithm, implemented as follows: first, image edge information is extracted using the wavelet multi-scale product; a multi-scale Harris corner detection operator is then applied to this edge information to extract corner points; the best matching point pairs are determined by estimating transformation parameters and defining a similarity measure function; finally, the transformation parameters are solved by least squares. Its advantages are: 1. high registration precision and speed; 2. wavelet multi-scale product edge detection suppresses the interference of noise in feature point extraction; 3. the scale-space representation of the corner points enables registration between multi-resolution images.
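An illustrative reading of this registration pipeline (not the patent's exact implementation) detects Harris corners with OpenCV, filters candidate matches by normalized cross-correlation, and fits a thin-plate spline to the surviving correspondences with SciPy; the image files, search window and acceptance threshold are assumptions:

```python
# Hypothetical sketch: Harris corners, cross-correlation filtering of
# mismatches, then a thin-plate-spline (TPS) warp from the matched points.
import cv2
import numpy as np
from scipy.interpolate import RBFInterpolator

def harris_points(gray, n=200):
    """Locations of the n strongest Harris responses as (x, y) rows."""
    resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.unravel_index(np.argsort(resp, axis=None)[-n:], resp.shape)
    return np.stack([xs, ys], axis=1)

def patch_ncc(a, b, p, q, r=7):
    """Normalized cross-correlation of (2r+1)-sized patches at p in a, q in b."""
    pa = a[p[1]-r:p[1]+r+1, p[0]-r:p[0]+r+1].astype(np.float64)
    pb = b[q[1]-r:q[1]+r+1, q[0]-r:q[0]+r+1].astype(np.float64)
    if pa.shape != (2*r+1, 2*r+1) or pb.shape != pa.shape:
        return -1.0                       # patch fell off the image border
    pa, pb = pa - pa.mean(), pb - pb.mean()
    d = np.sqrt((pa**2).sum() * (pb**2).sum())
    return float((pa*pb).sum() / d) if d > 0 else -1.0

def best_match(p, a, b, search=10):
    """Best-correlating location for a's patch at p inside b's search window."""
    scored = [(patch_ncc(a, b, p, (p[0]+dx, p[1]+dy)), (p[0]+dx, p[1]+dy))
              for dy in range(-search, search+1)
              for dx in range(-search, search+1)]
    return max(scored)

virtual = cv2.imread("virtual_render.png", cv2.IMREAD_GRAYSCALE)  # assumed
optical = cv2.imread("optical_frame.png", cv2.IMREAD_GRAYSCALE)   # assumed

pairs = []
for p in harris_points(virtual):
    score, q = best_match((int(p[0]), int(p[1])), virtual, optical)
    if score > 0.8:                     # assumed acceptance threshold
        pairs.append((p, q))            # cross-correlation filtered match

src = np.array([p for p, _ in pairs], dtype=np.float64)
dst = np.array([q for _, q in pairs], dtype=np.float64)

# TPS mapping from virtual-image coordinates to optical-image coordinates.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")
registered = tps(src)   # warps virtual-frame points into the optical frame
```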
In the matching process, the problem of virtual-real occlusion must also be solved. The solution operates in two stages, offline processing and online processing: 1. In offline processing, left and right images are first captured and the depth value of each pixel in the scene is computed; the depth map is then refined to extract a relatively coarse occlusion edge. At the same time, the value of each pixel of the scene in HSV color space is computed, and image enhancement such as sharpening is applied to obtain a clearer contour. A fusion step then combines the coarse occlusion edge with the contour information to obtain a higher-precision occlusion edge. 2. In online processing, the feature points are tracked first, and the displacement of the target contour is computed from the displacement of the feature points to obtain an approximate contour; the accurate contour of the target object is then solved within a strip-shaped region centered on the approximate contour; finally, a virtual-real composite image with the correct occlusion relationship is obtained by redrawing. The next frame image is then acquired, the target contour obtained from the current frame is taken as the initial contour of the next frame, and the above steps are repeated.
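The offline depth step can be illustrated with standard stereo block matching followed by edge extraction on the disparity map; all parameters and file names in this sketch are assumptions:

```python
# Hypothetical sketch of the offline stage: stereo depth, coarse occlusion
# edges from depth discontinuities, sharpened HSV contours, then fusion.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameter values are illustrative only.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

# Coarse occlusion edge: discontinuities in the (normalized) disparity map.
disp_u8 = cv2.normalize(disparity, None, 0, 255,
                        cv2.NORM_MINMAX).astype(np.uint8)
coarse_edges = cv2.Canny(disp_u8, 50, 150)

# Clearer contour: sharpen the V channel of the color image in HSV space.
color = cv2.imread("left_color.png")                   # assumed color frame
v = cv2.cvtColor(color, cv2.COLOR_BGR2HSV)[:, :, 2]
sharp = cv2.addWeighted(v, 1.5, cv2.GaussianBlur(v, (5, 5), 0), -0.5, 0)
contour_edges = cv2.Canny(sharp, 50, 150)

# Fuse: keep depth edges that coincide with (dilated) image contours.
occlusion_edge = cv2.bitwise_and(
    coarse_edges, cv2.dilate(contour_edges, np.ones((3, 3), np.uint8)))
```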
The abdominal viscera are not fixed: they are highly mobile and deform easily, so the difficulty lies in handling deformation. The solution is as follows: after the patient is CT-scanned in advance and the three-dimensional virtual model is reconstructed, sub-data can also be reconstructed by further processing. The sub-data comprise sub-virtual images (for example, the virtual image of a partial blood vessel when it is in a vertical state). During the operation, the feature points in the optical image are tracked; when the feature points of a partial region of the optical image turn over or move, the sub-virtual image of that region (such as the virtual image of the partial blood vessel in the vertical state) is called and superimposed onto the virtual image for output.
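The intra-operative feature tracking that triggers this sub-virtual-image swap is commonly done with pyramidal Lucas-Kanade optical flow; the displacement threshold in this sketch is an assumption:

```python
# Hypothetical sketch: track feature points of a region between frames and
# flag the region for a sub-virtual-image swap when it has moved or turned.
import cv2
import numpy as np

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # assumed frames
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

pts = cv2.goodFeaturesToTrack(prev, maxCorners=300,
                              qualityLevel=0.01, minDistance=7)
new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

tracked = status.ravel() == 1
motion = np.linalg.norm((new_pts - pts)[tracked].reshape(-1, 2), axis=1)

# Assumed rule: a median displacement above 5 px means the region moved
# enough that its sub-virtual image should be called and superimposed.
if motion.size and np.median(motion) > 5.0:
    print("region moved: superimpose its sub-virtual image")
```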
The embodiment has the following advantages:
1. The navigation method acquires scanned image data of the patient, covering at least the surgical site, in advance. During the operation the medical instrument acquires the optical image at the front end of its lens, the tracking element tracks the position of the instrument, and the position data of the lens is matched with the scanned image data to obtain a virtual image corresponding to the optical image, realizing real-time dynamic navigation of the operation.
2. The scanned image data acquired in advance comprises the image data and the corresponding positioning data. The pre-scan can be completed by CT scanning or other means; positioning base points can be placed on the patient during scanning, or features of the patient's own body can serve as base points, so that the image data is accurately positioned and precisely matched with the optical image.
3. To make it easier for medical staff to observe the operation, the real-time optical image at the front end of the lens and the virtual image are fused to form a fused image, which is output.
4. During the operation, the abdominal viscera have a complex structure, are not fixed in position and deform easily, so registration errors can arise between the optical image at the front end of the lens and the reconstructed virtual image. To improve registration accuracy, the surgical scene is divided into several sub-scenes.
5. For gastrointestinal surgery, the following five sub-scenes are employed: the central region, the right lower region, the left lower region, the right upper region and the hepatogastric region. Distinguishing these five sub-scenes better meets the practical needs of the operation.
6. The medical instrument is tracked by the tracking element and the surgical scene is matched with the scanned image data, but the precision still needs improvement. At this point, feature points in the optical image at the front end of the lens can be extracted and put into correspondence with feature points in the virtual image, and a correction applied according to the deviation between them, achieving precise matching.
7. The feature points in the optical image are tracked; when the feature points of a partial region of the optical image turn over or move, the sub-virtual image of that region is called and superimposed onto the virtual image for output. The method thus adapts to positional changes of organs and blood vessels during the operation and realizes real-time, accurate navigation.
8. When the optical image is matched with the virtual image, erroneous feature points are removed or the deviation is corrected according to the deviation value, achieving an accurate fusion effect.
The above are merely specific embodiments of the present invention, and the scope of the invention is not limited thereby; any alterations and modifications that do not depart from the spirit of the invention fall within its scope.

Claims (1)

1. A gastrointestinal surgery navigation system for medical instruments based on the combination of optical tracking and image matching, characterized in that
the system comprises:
a medical instrument for accessing a surgical field for performing a surgical procedure;
a tracking element for tracking position data of the medical instrument;
a storage unit configured to store previously acquired scan image data including at least sub-scene data of at least two sub-scenes;
the matching unit is used for matching the position data of the medical instrument with the corresponding sub-scene data and obtaining a virtual image corresponding to the sub-scene;
the output unit is used for outputting at least part of the virtual images corresponding to the corresponding sub-scenes outwards;
the medical instrument is provided with a lens, and when the medical instrument enters the operation position, a real-time optical image at the front end of the lens is obtained through the lens;
the tracking element tracks the position data of the medical instrument lens and matches the position data with the scanning image data to obtain a virtual image corresponding to the optical image;
outputting the real-time optical image and the virtual image at the front end of the medical instrument for real-time navigation of the operation;
the sub-scenes include: central, right inferior, left inferior, right superior, hepatogastric;
at least two of the sub-scene data are: left lower region data, right lower region data, right upper region data, central region data and hepatogastric region data; the position data of the lens of the medical instrument corresponds to positioning data in the left lower region data, the right lower region data, the right upper region data, the central region data or the hepatogastric region data, and a virtual image corresponding to the position data is displayed;
the scanned image data also comprises subdata, wherein the subdata comprises a sub-virtual image, characteristic points in the optical image are tracked, when the characteristic points of partial areas in the optical image turn over or move, the sub-virtual image of the area is called, and the sub-virtual image is superposed to the virtual image to be output;
after the medical instrument acquires a real-time optical image at the front end of the lens, the matching unit extracts feature points in the optical image, identifies erroneous feature points according to the correspondence between these feature points and the feature points in the virtual image, and removes the erroneous feature points; or it calculates a deviation value between the optical image and the virtual image and corrects the virtual image according to the deviation value;
identifying an erroneous feature point and removing it specifically comprises: first, obtaining a virtual instrument-view image at the corresponding position according to the three-dimensional model and the position of the medical instrument; detecting the feature points of the virtual image and the optical image with the Harris feature extraction algorithm; then, for the erroneous registration points, filtering out the mismatched points according to feature cross-correlation and tissue-structure invariance; and finally, after the correctly registered feature points are obtained, producing the registered image by TPS transformation.
CN201610717521.8A 2016-08-24 2016-08-24 Gastrointestinal surgery navigation method and system based on optical tracking and image matching Active CN106236264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610717521.8A CN106236264B (en) 2016-08-24 2016-08-24 Gastrointestinal surgery navigation method and system based on optical tracking and image matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610717521.8A CN106236264B (en) 2016-08-24 2016-08-24 Gastrointestinal surgery navigation method and system based on optical tracking and image matching

Publications (2)

Publication Number Publication Date
CN106236264A CN106236264A (en) 2016-12-21
CN106236264B true CN106236264B (en) 2020-05-08

Family

ID=57594757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610717521.8A Active CN106236264B (en) 2016-08-24 2016-08-24 Gastrointestinal surgery navigation method and system based on optical tracking and image matching

Country Status (1)

Country Link
CN (1) CN106236264B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680103A * 2017-09-12 2018-02-09 南方医科大学南方医院 Mixed-reality method for automatic virtual-real occlusion handling in a real-time navigation system for intelligent laparoscopic gastric cancer surgery
CN107704661A * 2017-09-13 2018-02-16 南方医科大学南方医院 Construction method of a mixed finite-element deformation model for a real-time navigation system for laparoscopy-assisted gastric cancer surgery
CN109223177A (en) * 2018-07-30 2019-01-18 艾瑞迈迪医疗科技(北京)有限公司 Image display method, device, computer equipment and storage medium
CN110478039A (en) * 2019-07-24 2019-11-22 常州锦瑟医疗信息科技有限公司 A kind of medical equipment tracking system based on mixed reality technology
CN111658141B (en) * 2020-05-07 2023-07-25 南方医科大学南方医院 Gastrectomy port position navigation system, gastrectomy port position navigation device and storage medium
CN113786239B (en) * 2021-08-26 2023-08-01 哈尔滨工业大学(深圳) Method and system for tracking and real-time early warning of surgical instruments under stomach and digestive tract

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6892090B2 (en) * 2002-08-19 2005-05-10 Surgical Navigation Technologies, Inc. Method and apparatus for virtual endoscopy
EP1685535B1 (en) * 2003-08-21 2014-04-30 Philips Intellectual Property & Standards GmbH Device and method for combining two images
CN101375805A (en) * 2007-12-29 2009-03-04 清华大学深圳研究生院 Method and system for guiding operation of electronic endoscope by auxiliary computer
JP5535725B2 (en) * 2010-03-31 2014-07-02 富士フイルム株式会社 Endoscope observation support system, endoscope observation support device, operation method thereof, and program
US20130303887A1 (en) * 2010-08-20 2013-11-14 Veran Medical Technologies, Inc. Apparatus and method for four dimensional soft tissue navigation
CN103371870B (en) * 2013-07-16 2015-07-29 深圳先进技术研究院 A kind of surgical navigation systems based on multimode images
CN103489178A (en) * 2013-08-12 2014-01-01 中国科学院电子学研究所 Method and system for image registration
CN103530872B (en) * 2013-09-18 2016-03-30 北京理工大学 A kind of error hiding delet method based on angle restriction
CN103948432A (en) * 2014-04-30 2014-07-30 深圳先进技术研究院 Algorithm for augmented reality of three-dimensional endoscopic video and ultrasound image during operation

Also Published As

Publication number Publication date
CN106236264A (en) 2016-12-21

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant