CN115245303A - Image fusion system and method for endoscope three-dimensional navigation
- Publication number: CN115245303A (application CN202110448272.8A)
- Authority: CN (China)
- Prior art keywords: endoscope, coordinate system, image, dimensional, optical positioning
- Prior art date: 2021-04-25
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
- A61B1/00043—Operational features of endoscopes provided with output arrangements
Abstract
The invention discloses an image fusion system and method for endoscope three-dimensional navigation. The system comprises a graphics workstation, an optical positioning marker, an optical positioning system, and an endoscope system. The graphics workstation imports a pre-examination three-dimensional image, reconstructs and post-processes it, extracts image point cloud data and optical point cloud data of a lesion and its surrounding tissue structures, matches the two point clouds with a point cloud registration algorithm, and calculates the spatial transformation from the optical positioning system coordinate system to the pre-examination three-dimensional image coordinate system; it then calculates the spatial transformation from the endoscope view coordinate system to the pre-examination three-dimensional image coordinate system. Based on the real-time spatial position of the endoscope, the current field of view of the three-dimensional image corresponding to the endoscopic image is displayed in real time, superimposed and fused with the endoscopic image. The invention combines the two-dimensional endoscopic view with the pre-examination three-dimensional image, providing the physician with rich image information to assist diagnosis during endoscopy.
Description
Technical Field
The invention relates to the technical field of endoscopic imaging, and in particular to an image fusion system and method for endoscope three-dimensional navigation.
Background
Three-dimensional image data such as CT/MR provides abundant anatomical information about a patient, including the spatial positions of a lesion and its surrounding tissue structures such as blood vessels. During medical-image-guided interventional therapy, the physician refers to the patient's three-dimensional image to determine the relative spatial relationship between the lesion and the surrounding tissues and vessels, avoiding injury to important anatomical structures during examination and treatment.
The endoscope is an inspection instrument that integrates traditional optics, ergonomics, precision machinery, modern electronics, mathematics, and software; it can enter the stomach through the oral cavity or enter the body through other natural orifices. It can therefore reveal lesions that X-rays cannot visualize: with the aid of an endoscope, the physician can observe, for example, ulcers or tumors in the stomach and prescribe the optimal treatment for the patient.
However, an endoscope has a limited field of view, which makes it difficult to see the lesion as a whole and to accurately judge the spatial relationship between the lesion and the blood vessels of the surrounding tissues.
Disclosure of Invention
In view of the above technical problems in the prior art, the invention provides an image fusion system for endoscope three-dimensional navigation that can fuse and display, in real time, a pre-examination three-dimensional image such as CT/MR together with the current field-of-view image of the endoscope. An optical positioning marker is mounted on the endoscope so that its spatial position can be tracked in real time. The physician scans the lesion and its surrounding tissue structures from multiple angles with the endoscope; the software reconstructs a spatial point cloud of the lesion and its surroundings from the multi-frame endoscopic images and registers it with the point cloud segmented and reconstructed in advance from the CT/MR three-dimensional image, completing the registration of the patient space and the preoperative three-dimensional image space. Thereafter, as the endoscope moves, the software renders the three-dimensional image within the current endoscope field of view in real time and displays it superimposed on the endoscopic image.
The embodiment of the invention provides an image fusion system for endoscope three-dimensional navigation, comprising a graphics workstation, an optical positioning marker, an optical positioning system, and an endoscope system. The graphics workstation is used for importing a pre-examination three-dimensional image, reconstructing and post-processing it, extracting image point cloud data of a lesion and its surrounding tissue structures, extracting optical point cloud data of the lesion and its surrounding tissue structures in an optical positioning space, matching the image point cloud data and the optical point cloud data based on a point cloud registration algorithm, and calculating the spatial transformation from the optical positioning system coordinate system to the pre-examination three-dimensional image coordinate system; calculating the spatial transformation from the endoscope view coordinate system to the pre-examination three-dimensional image coordinate system; and displaying in real time, based on the real-time spatial position of the endoscope, the current field of view of the three-dimensional image corresponding to the endoscopic image, superimposed and fused with the endoscopic image. The optical positioning marker is mounted on the endoscope; the optical positioning system is used for tracking the spatial position of the optical positioning marker in real time; and the endoscope system is used for endoscopy.
The embodiment of the invention also provides an image fusion method for endoscope three-dimensional navigation, comprising the following steps: importing the pre-examination three-dimensional image into a graphics workstation; the graphics workstation segments the lesion and its surrounding tissue structures, outputs a two-dimensional image of the segmentation result, converts it into a contour-only triangular surface mesh, and then extracts the image point cloud data of the lesion and its surrounding tissue structures; mounting an optical positioning marker on the endoscope and providing an optical positioning system that tracks the spatial position of the marker in real time; capturing multi-frame two-dimensional field-of-view images of the lesion and its surrounding tissue structures with the endoscope; based on these images, the graphics workstation reconstructs three-dimensional point cloud data in the optical positioning space associated with the optical positioning marker and extracts the optical point cloud data of the lesion and its surrounding tissue structures; the graphics workstation matches the image point cloud data and the optical point cloud data with a point cloud registration algorithm and calculates the spatial transformation from the optical positioning system coordinate system to the pre-examination three-dimensional image coordinate system; the graphics workstation calculates the spatial transformation from the endoscope view coordinate system to the pre-examination three-dimensional image coordinate system; and, as the endoscope moves, the graphics workstation, based on the real-time spatial position of the endoscope, displays in real time the current field of view of the three-dimensional image corresponding to the endoscopic image, superimposed and fused with the endoscopic image.
In some embodiments, the optical positioning markers are reflective beads or black-and-white checkerboards used for optical positioning.
In some embodiments, the spatial transformation of the endoscope view coordinate system to the pre-examination three-dimensional image coordinate system is derived based on the spatial transformation of the endoscope view coordinate system to the optical positioning marker coordinate system, the spatial transformation of the optical positioning marker coordinate system to the optical positioning system coordinate system, and the spatial transformation of the optical positioning system coordinate system to the pre-examination three-dimensional image coordinate system.
Compared with the prior art, the embodiments of the invention have the following beneficial effects:
1. the two-dimensional endoscopic view is combined with the pre-examination three-dimensional image, providing the physician with rich image information to assist diagnosis during endoscopy;
2. the pre-examination three-dimensional image provides a clear and complete 3D map of the lesion and its surrounding tissue structures, allowing the physician to observe their relative positions in three-dimensional space in real time and from multiple angles during endoscopy;
3. the registration of the patient space and the preoperative three-dimensional image space is completed automatically by a point cloud registration algorithm, saving manual registration time.
Drawings
Fig. 1 is a block diagram of an image fusion system for endoscopic three-dimensional navigation according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of the spatial transformation of each coordinate system.
Fig. 3 is a flowchart of the image fusion method for endoscope three-dimensional navigation according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of the endoscope image and the current-field three-dimensional image being displayed in a superimposed manner.
Detailed Description
The following is a detailed description of specific embodiments of the invention, given in connection with the accompanying drawings; it is not intended to limit the invention. These and other characteristics of the invention will become apparent from the description of a preferred embodiment, given as a non-limiting example, with reference to the attached drawings.
The embodiment of the invention provides an image fusion system for endoscope three-dimensional navigation.
Fig. 1 is a block diagram of an image fusion system for endoscopic three-dimensional navigation according to an embodiment of the present invention.
The image fusion system for endoscope three-dimensional navigation comprises a graphics workstation, an optical positioning marker, an optical positioning system, and an endoscope system.
The graphics workstation is used for importing the pre-examination three-dimensional image, reconstructing and post-processing it, extracting image point cloud data of the lesion and its surrounding tissue structures, extracting optical point cloud data of the lesion and its surrounding tissue structures in the optical positioning space, matching the image point cloud data and the optical point cloud data based on a point cloud registration algorithm, and calculating the spatial transformation from the optical positioning system coordinate system to the pre-examination three-dimensional image coordinate system; calculating the spatial transformation from the endoscope view coordinate system to the pre-examination three-dimensional image coordinate system; and, based on the real-time spatial position of the endoscope, displaying in real time the current field of view of the three-dimensional image corresponding to the endoscopic image, superimposed and fused with the endoscopic image.
The optical positioning marker is mounted on the endoscope; for example, a reflective ball or a black-and-white checkerboard may be mounted at the distal end of the endoscope (for instance a ureteroscope) for optical positioning.
The optical positioning system is used to track the spatial position of the optical positioning marker in real time. For example, a binocular camera can track in real time, based on the binocular (stereo) positioning principle, the spatial position of optical positioning markers such as retro-reflective beads or black-and-white checkerboards mounted on the endoscope.
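For illustration, a minimal sketch of binocular triangulation of one marker, assuming a calibrated stereo pair; the intrinsics, baseline, and pixel coordinates below are illustrative assumptions, not values from the patent:

```python
import numpy as np
import cv2

# Shared intrinsics of the calibrated stereo pair (assumed values)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# 3x4 projection matrices: left camera at the origin, right camera offset
# by an assumed 60 mm baseline along x
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])

# Pixel coordinates of the same reflective bead in both images (2xN, illustrative)
pts_left = np.array([[320.5], [240.2]])
pts_right = np.array([[290.1], [240.3]])

# Triangulate to homogeneous coordinates, then dehomogenize
X_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)  # 4xN
X = (X_h[:3] / X_h[3]).T  # Nx3 bead position in the tracker coordinate system
print(X)
```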
Endoscopic systems are used for routine endoscopy. The endoscope system is conventional and will not be described in detail.
Fig. 3 is a flowchart of the image fusion method for endoscope three-dimensional navigation according to an embodiment of the present invention. The method comprises the following steps:
step S100, importing a three-dimensional image before examination.
In this step, the patient's pre-examination three-dimensional image data, such as CT images, MR images, three-dimensional ultrasound images, or PET/CT images, is imported into the graphics workstation.
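A minimal sketch of such an import, assuming the SimpleITK library is available; the DICOM directory path is a hypothetical placeholder:

```python
import SimpleITK as sitk

# Read a DICOM series from a directory (path is a hypothetical placeholder)
reader = sitk.ImageSeriesReader()
dicom_names = reader.GetGDCMSeriesFileNames("/data/patient01/ct_series")
reader.SetFileNames(dicom_names)
volume = reader.Execute()  # 3D image carrying spacing, origin and direction

# Voxel array (z, y, x) for the segmentation and reconstruction steps below
array = sitk.GetArrayFromImage(volume)
print(volume.GetSize(), volume.GetSpacing())
```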
Step S200: the graphics workstation can automatically segment the lesion and its surrounding tissue structures based on algorithms such as threshold segmentation, active contours, sparse-field level sets, and deep learning, output a two-dimensional image of the segmentation result, convert it into a contour-only triangular surface mesh using algorithms such as isosurface extraction, and then extract the image point cloud data of the lesion and its surrounding tissue structures.

In this step, the graphics workstation segments the lesion and its surrounding tissue structures and extracts the image point cloud data $D_I$.
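A minimal sketch of extracting $D_I$ from a binary segmentation via isosurface extraction, assuming scikit-image; the mask file and voxel spacing are illustrative assumptions:

```python
import numpy as np
from skimage import measure

seg = np.load("lesion_mask.npy")  # hypothetical binary segmentation volume (z, y, x)
spacing = (1.0, 0.8, 0.8)         # assumed voxel spacing in mm

# Isosurface extraction yields the contour-only triangle mesh of step S200
verts, faces, normals, values = measure.marching_cubes(
    seg.astype(np.float32), level=0.5, spacing=spacing)

D_I = verts  # Nx3 vertex positions serve as the image point cloud D_I
print(D_I.shape)
```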
Step S300: an optical positioning marker is mounted on the endoscope, and an optical positioning system is provided to track the spatial position of the optical positioning marker in real time.
The optical positioning marker may be a small reflective ball whose surface is coated with a special material, or a conventional planar black-and-white checkerboard. It may be mounted at the distal end of the endoscope. The relative spatial position between the optical positioning marker and the endoscope must remain fixed throughout the endoscopic procedure, and the marker must remain trackable by the optical positioning system at all times.
step S400, a plurality of frames of two-dimensional visual field images of the focus and the surrounding tissue structure are captured by using the endoscope.
In this step, the physician uses the endoscope to scan the lesion and its surrounding tissue structures from multiple angles; this procedure is conventional and is not described in detail.
Step S500: based on the multi-frame two-dimensional field-of-view images, the graphics workstation reconstructs three-dimensional point cloud data in the optical positioning space associated with the optical positioning marker, and extracts the optical point cloud data of the lesion and its surrounding tissue structures in the optical positioning space.
Based on the multi-frame two-dimensional field-of-view images, the graphics workstation can automatically reconstruct three-dimensional point cloud data in the optical positioning space, for example using a weighted iterative feature algorithm, automatically segment the lesion and its surrounding tissue structures within the overall point cloud using prior information such as shape-contour constraints, and extract the optical point cloud data $D_O$ of the lesion and its surrounding tissue structures in the optical positioning space.

The difference between the optical point cloud data and the image point cloud data is that the latter is generated from the pre-examination three-dimensional image, i.e., it encodes the coordinates of the lesion and its surrounding tissue structures in the pre-examination image space, whereas the former is generated from multi-frame two-dimensional endoscopic images, i.e., it encodes the coordinates of the lesion and its surrounding tissue structures in the optical positioning space.
Step S600: the graphics workstation automatically matches the image point cloud data and the optical point cloud data based on point cloud registration algorithms such as sample consensus initial alignment and iterative closest point, and calculates the spatial transformation from the optical positioning system coordinate system to the pre-examination three-dimensional image coordinate system.
In this step, the registration of the image point cloud data $D_I$ and the optical point cloud data $D_O$ is completed automatically by the point cloud registration algorithm, realizing the registration of the optical positioning space and the three-dimensional image space, and yielding the spatial transformation from the optical positioning system coordinate system to the pre-examination three-dimensional image coordinate system:

$$T_{O}^{I} = \begin{bmatrix} R & t \\ \mathbf{0}^{T} & 1 \end{bmatrix}$$

where $R$ is a 3x3 rotation matrix, an orthonormal matrix representing the rotation between the two coordinate systems, and $t$ is a 3x1 translation vector representing the translation between them. For a point $(x_o, y_o, z_o)^T$ in the optical positioning space, left-multiplying its homogeneous coordinate $(x_o, y_o, z_o, 1)^T$ by the spatial transformation $T_{O}^{I}$ gives its coordinate $(x_I, y_I, z_I, 1)^T$ in the three-dimensional image space.
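For illustration, a sketch of this two-stage registration (a RANSAC feature-matching stage standing in for the sample consensus initial alignment, followed by ICP refinement), assuming the Open3D library; the file names, voxel size, and thresholds are assumptions:

```python
import open3d as o3d

# Hypothetical point cloud files standing in for D_O and D_I
src = o3d.io.read_point_cloud("optical_cloud.ply")  # D_O, optical space
dst = o3d.io.read_point_cloud("image_cloud.ply")    # D_I, image space

voxel = 2.0  # mm, assumed working resolution
src, dst = src.voxel_down_sample(voxel), dst.voxel_down_sample(voxel)
for pc in (src, dst):
    pc.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))

def fpfh(pc):
    return o3d.pipelines.registration.compute_fpfh_feature(
        pc, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))

# Feature-based RANSAC as the initial alignment stage
init = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src, dst, fpfh(src), fpfh(dst), True, 1.5 * voxel,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    3, [],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# ICP refinement; result.transformation is the 4x4 estimate of T_O^I
result = o3d.pipelines.registration.registration_icp(
    src, dst, voxel, init.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print(result.transformation)
```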
In step S700, the graphics workstation calculates a spatial transformation from the endoscope view coordinate system to the pre-examination three-dimensional image coordinate system.
The spatial transformations used by the present system are shown in Fig. 2, where the symbol $T$ denotes a homogeneous spatial transformation matrix. $T_{E}^{M}$ denotes the spatial transformation from the endoscope view coordinate system to the optical positioning marker coordinate system; it is determined by the mechanical design dimensions and is therefore known. $T_{M}^{O}$ denotes the spatial transformation from the optical positioning marker coordinate system to the optical positioning system coordinate system; it is obtained from the positioning information of the optical positioning system and is therefore known. $T_{O}^{I}$ denotes the spatial transformation from the optical positioning system coordinate system to the pre-examination three-dimensional image coordinate system, obtained in step S600. The spatial transformation $T_{E}^{I}$ from the endoscope view coordinate system to the pre-examination three-dimensional image coordinate system is calculated as:

$$T_{E}^{I} = T_{O}^{I} \, T_{M}^{O} \, T_{E}^{M}$$
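A minimal numpy sketch of chaining these homogeneous transforms; the three matrices below are illustrative placeholders, not values from the patent:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder values: in the system these come from the mechanical design,
# the optical tracker reading, and the registration of step S600 respectively
T_ME = make_T(np.eye(3), [0.0, 0.0, 15.0])     # endoscope view -> marker
T_OM = make_T(np.eye(3), [10.0, -5.0, 300.0])  # marker -> tracker
T_IO = np.eye(4)                               # tracker -> image space

T_IE = T_IO @ T_OM @ T_ME  # endoscope view -> pre-examination image space

p_endo = np.array([0.0, 0.0, 20.0, 1.0])  # a point 20 mm in front of the lens
print((T_IE @ p_endo)[:3])                # the same point in image coordinates
```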
Step S800: move the endoscope; based on the real-time spatial position of the endoscope, the graphics workstation displays in real time the current field of view of the three-dimensional image corresponding to the endoscopic image, superimposed and fused with the endoscopic image.

In this step, as the endoscope moves, the graphics workstation continuously renders the three-dimensional image from the current endoscope viewpoint and displays it superimposed on the endoscopic image.
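A minimal sketch of the superimposed display, assuming OpenCV; the two input files are hypothetical stand-ins for a live endoscopic frame and the three-dimensional view rendered from the current endoscope viewpoint:

```python
import cv2

# Hypothetical stand-ins for a live endoscopic frame and the rendered 3D view
endo_frame = cv2.imread("endo_frame.png")
render = cv2.imread("ct_render.png")
render = cv2.resize(render, (endo_frame.shape[1], endo_frame.shape[0]))

# Alpha-blend the rendered 3D view over the endoscopic image
alpha = 0.4  # assumed blending weight
fused = cv2.addWeighted(render, alpha, endo_frame, 1.0 - alpha, 0.0)
cv2.imwrite("fused_view.png", fused)
```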
Thus, with the system and method of the invention, the two-dimensional endoscopic view is combined with the pre-examination three-dimensional image, providing the physician with rich image information to assist diagnosis during endoscopy. The pre-examination three-dimensional image provides a clear and complete 3D map of the lesion and its surrounding tissue structures, allowing the physician to observe their relative positions in three-dimensional space in real time and from multiple angles. The registration of the patient space and the preoperative three-dimensional image space is completed automatically by a point cloud registration algorithm, saving manual registration time.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more versions thereof) may be used in combination with each other. For example, other embodiments may be utilized by those of ordinary skill in the art upon reading the foregoing description. In addition, in the above-described embodiments, various features may be grouped together to streamline the disclosure. This should not be interpreted as an intention that a non-claimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that the embodiments can be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above embodiments are only exemplary embodiments of the present invention, and are not intended to limit the present invention, and the scope of the present invention is defined by the claims. Various modifications and equivalents may be made by those skilled in the art within the spirit and scope of the present invention, and such modifications and equivalents should also be considered as falling within the scope of the present invention.
Claims (6)
1. An image fusion system for endoscope three-dimensional navigation, characterized by comprising a graphics workstation, an optical positioning marker, an optical positioning system, and an endoscope system, wherein
the graphics workstation is used for importing a pre-examination three-dimensional image, reconstructing and post-processing it, extracting image point cloud data of a lesion and its surrounding tissue structures, extracting optical point cloud data of the lesion and its surrounding tissue structures in an optical positioning space, matching the image point cloud data and the optical point cloud data based on a point cloud registration algorithm, and calculating the spatial transformation from the optical positioning system coordinate system to the pre-examination three-dimensional image coordinate system; calculating the spatial transformation from the endoscope view coordinate system to the pre-examination three-dimensional image coordinate system; and displaying in real time, based on the real-time spatial position of the endoscope, the current field of view of the three-dimensional image corresponding to the endoscopic image, superimposed and fused with the endoscopic image;
the optical positioning mark is arranged on the endoscope;
the optical positioning system is used for tracking the spatial position of the optical positioning mark in real time;
the endoscope system is used for endoscopy.
2. The image fusion system for endoscope three-dimensional navigation according to claim 1, wherein the optical positioning markers are reflective beads or black-and-white checkerboards used for optical positioning.
3. The image fusion system for endoscope three-dimensional navigation according to claim 1, wherein the spatial transformation from the endoscope view coordinate system to the pre-examination three-dimensional image coordinate system is obtained based on the spatial transformation from the endoscope view coordinate system to the optical positioning marker coordinate system, the spatial transformation from the optical positioning marker coordinate system to the optical positioning system coordinate system, and the spatial transformation from the optical positioning system coordinate system to the pre-examination three-dimensional image coordinate system.
4. An image fusion method for endoscopic three-dimensional navigation, comprising:
importing the pre-examination three-dimensional image into a graphics workstation;
the graphics workstation segments the lesion and its surrounding tissue structures, outputs a two-dimensional image of the segmentation result, converts it into a contour-only triangular surface mesh, and then extracts the image point cloud data of the lesion and its surrounding tissue structures;
mounting an optical positioning marker on the endoscope, and providing an optical positioning system for tracking the spatial position of the optical positioning marker in real time;
capturing multi-frame two-dimensional field-of-view images of the lesion and its surrounding tissue structures with the endoscope;
based on the multi-frame two-dimensional field-of-view images, the graphics workstation reconstructs three-dimensional point cloud data in an optical positioning space, and extracts optical point cloud data of the lesion and its surrounding tissue structures in the optical positioning space;
the graphics workstation matches the image point cloud data and the optical point cloud data based on a point cloud registration algorithm, and calculates the spatial transformation from the optical positioning system coordinate system to the pre-examination three-dimensional image coordinate system;
the graphics workstation calculates the spatial transformation from the endoscope view coordinate system to the pre-examination three-dimensional image coordinate system; and
moving the endoscope, wherein the graphics workstation, based on the real-time spatial position of the endoscope, displays in real time the current field of view of the three-dimensional image corresponding to the endoscopic image, superimposed and fused with the endoscopic image.
5. The image fusion method for endoscope three-dimensional navigation according to claim 4, wherein the optical positioning markers are reflective beads or black-and-white checkerboards used for optical positioning.
6. The image fusion method for endoscopic three-dimensional navigation according to claim 4, wherein the spatial transformation of the endoscopic view coordinate system to the pre-examination three-dimensional image coordinate system is obtained based on a spatial transformation of the endoscopic view coordinate system to the optical positioning marker coordinate system, a spatial transformation of the optical positioning marker coordinate system to the optical positioning system coordinate system, and a spatial transformation of the optical positioning system coordinate system to the pre-examination three-dimensional image coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110448272.8A CN115245303A (en) | 2021-04-25 | 2021-04-25 | Image fusion system and method for endoscope three-dimensional navigation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115245303A | 2022-10-28 |
Family
ID=83696873
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110448272.8A | Image fusion system and method for endoscope three-dimensional navigation (published as CN115245303A, pending) | 2021-04-25 | 2021-04-25 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115245303A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024174779A1 (en) * | 2023-02-23 | 2024-08-29 | 深圳市精锋医疗科技股份有限公司 | Endoscope registration method and device, and endoscope calibration system |
CN117495693A (en) * | 2023-10-24 | 2024-02-02 | 北京仁馨医疗科技有限公司 | Image fusion method, system, medium and electronic device for endoscope |
CN117495693B (en) * | 2023-10-24 | 2024-06-04 | 北京仁馨医疗科技有限公司 | Image fusion method, system, medium and electronic device for endoscope |
Similar Documents

Publication | Publication Date | Title |
---|---|---|
US11883118B2 | | Using augmented reality in surgical navigation |
CN110010249B | | Augmented reality operation navigation method and system based on video superposition and electronic equipment |
EP2429400B1 | | Quantitative endoscopy |
JP5918548B2 | | Endoscopic image diagnosis support apparatus, operation method thereof, and endoscopic image diagnosis support program |
US9498132B2 | | Visualization of anatomical data by augmented reality |
KR101572487B1 | | System and Method For Non-Invasive Patient-Image Registration |
JP6395995B2 | | Medical video processing method and apparatus |
EP2573735B1 | | Endoscopic image processing device, method and program |
Bernhardt et al. | | Automatic localization of endoscope in intraoperative CT image: a simple approach to augmented reality guidance in laparoscopic surgery |
WO2017211087A1 | | Endoscopic surgery navigation method and system |
CN103356155B | | Virtual endoscope assisted cavity lesion examination system |
US20060269108A1 | | Registration of three dimensional image data to 2D-image-derived data |
US20140160264A1 | | Augmented field of view imaging system |
WO2017027638A1 | | 3D reconstruction and registration of endoscopic data |
CN114145846A | | Operation navigation method and system based on augmented reality assistance |
JP5961504B2 | | Virtual endoscopic image generating apparatus, operating method thereof, and program |
JP5934070B2 | | Virtual endoscopic image generating apparatus, operating method thereof, and program |
Liu et al. | | Global and local panoramic views for gastroscopy: an assisted method of gastroscopic lesion surveillance |
CN115245303A | 2022-10-28 | Image fusion system and method for endoscope three-dimensional navigation |
CN115953377A | | Digestive tract ultrasonic endoscope image fusion method and system |
US20230248441A1 | | Extended-reality visualization of endovascular navigation |
Ben-Hamadou et al. | | Construction of extended 3D field of views of the internal bladder wall surface: A proof of concept |
CN115245302A | | System and method for reconstructing three-dimensional scene based on endoscope image |
CN111743628A | | Automatic puncture mechanical arm path planning method based on computer vision |
Wang et al. | | Towards video guidance for ultrasound, using a prior high-resolution 3D surface map of the external anatomy |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |