WO2023184527A1 - System and method for unsupervised stereoscopic reconstruction with disparity consistency - Google Patents
System and method for unsupervised stereoscopic reconstruction with disparity consistency
- Publication number
- WO2023184527A1 (PCT/CN2022/085012)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- processing device
- image processing
- interpolation
- estimated
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00163—Optical arrangements
- A61B1/00193—Optical arrangements adapted for stereoscopic vision
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/261—Image signal generators with monoscopic-to-stereoscopic image conversion
- H04N13/268—Image signal generators with monoscopic-to-stereoscopic image conversion based on depth image-based rendering [DIBR]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10068—Endoscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2213/00—Details of stereoscopic systems
- H04N2213/003—Aspects relating to the "2D+depth" image format
Definitions
- Minimally invasive surgery has become an indispensable part in surgical procedures and is performed with the aid of an endoscope, which allows for viewing of the surgical site through a natural opening, a small incision, or an access port.
- Conventional minimally invasive surgeries mostly employ monocular endoscopes, which display only two-dimensional (2D) images lacking depth information. It is therefore challenging for a surgeon to accurately move surgical instruments to specific locations inside a patient’s body. Surgeons usually perceive depth in 2D images through motion parallax, monocular cues, and other indirect visual feedback. Stereoscopic visualization provides better imaging of the surgical site during minimally invasive surgery, giving the surgeon depth perception.
- Dual-camera endoscopes have the drawback of being much more expensive than monocular endoscopes. Due to their size, stereoscopic endoscopes may also be cumbersome to use and obstruct instrument access during minimally invasive and robotic surgeries. Thus, monocular endoscopes are still preferred during such procedures, despite their imaging drawbacks. Stereoscopic reconstruction may be used on monocular images to obtain stereoscopic effects. However, a so-called “wave” phenomenon, an imaging artifact appearing as a traveling wave, may occur during stereoscopic reconstruction.
- the present disclosure relates to a stereoscopic visualization system for endoscopes and, more particularly, to a stereoscopic visualization system generating stereoscopic images based on monocular images.
- an image processing device for generating a stereoscopic video stream.
- the image processing device includes a processor; and a memory, having instructions stored thereon, which when executed by the processor cause the image processing device to: calculate an estimated depth map for an input image; calculate an initial disparity map based on the estimated depth map; calculate an average disparity map for the input image based on a plurality of estimated disparity maps and the initial disparity map; generate a counterpart image based on the average disparity map; and generate a stereoscopic image based on the input image and the counterpart image.
- Implementations of the above embodiment may include one or more of the following features.
- the instructions, when executed by the processor may further cause the image processing device to execute a convolutional neural network to calculate the estimated depth map.
- the image may be a frame from a video stream.
- the plurality of estimated disparity maps may be based on a plurality of adjacent frames of the video stream.
- the instructions, when executed by the processor, may also cause the image processing device to calculate the average disparity map by calculating an exponentially weighted moving average of the plurality of estimated disparity maps.
- the instructions when executed by the processor, may further cause the image processing device to generate the counterpart image using at least one of a bilinear interpolation, a nearest-neighbor interpolation, a linear interpolation, a bicubic interpolation, a trilinear interpolation, or an area interpolation.
- an imaging system for generating a stereoscopic image includes a monocular endoscope configured to capture an input image.
- the system also includes an image processing device having: a processor; and a memory, with instructions stored thereon, which when executed by the processor cause the image processing device to: calculate an estimated depth map for the input image; calculate an initial disparity map based on the estimated depth map; calculate an average disparity map for the input image based on a plurality of estimated disparity maps and the initial disparity map; generate a counterpart image based on the average disparity map; and generate a stereoscopic image based on each input image and the counterpart image.
- the imaging system may further include a stereoscopic display configured to display the stereoscopic image.
- the instructions when executed by the processor, may further cause the image processing device to execute a convolutional neural network to calculate the estimated depth map.
- the input image may be a frame from a video stream.
- the plurality of estimated disparity maps may be based on a plurality of adjacent frames of the video stream.
- the instructions, when executed by the processor, may also cause the image processing device to calculate the average disparity map by calculating an exponentially weighted moving average of the plurality of estimated disparity maps.
- the instructions when executed by the processor, may further cause the image processing device to generate the counterpart image using at least one of a bilinear interpolation, a nearest-neighbor interpolation, a linear interpolation, a bicubic interpolation, a trilinear interpolation, or an area interpolation.
- a method for generating a stereoscopic image includes calculating an estimated depth map for an input image of a video stream and calculating an initial disparity map based on the estimated depth map.
- the method also includes calculating an average disparity map for the input image based on a plurality of estimated disparity maps of a plurality of adjacent images and the initial disparity map.
- the method further includes generating a counterpart image based on the average disparity map, and generating a stereoscopic image based on each input image and the counterpart image.
- Implementations of the above embodiment may include one or more of the following features.
- the method further may also include receiving the image as a frame from a video stream.
- Calculating the average disparity map may further include calculating an exponentially weighted moving average of the plurality of estimated disparity maps.
- Each adjacent image of the plurality of adjacent images may be a frame from the video stream.
- the method may further include outputting the stereoscopic image on a stereoscopic display.
- Calculating the estimated depth map further may also include executing a convolutional neural network.
- FIG. 1 is a schematic view of an imaging system according to an embodiment of the present disclosure
- FIG. 2 shows two monocular endoscopic images and their corresponding predicted depth maps according to an embodiment of the present disclosure
- FIG. 3 is a flow chart of a stereoscopic image generating algorithm according to an embodiment of the present disclosure
- FIG. 4 is a schematic flow diagram of the stereoscopic image generating algorithm of FIG. 3 according to an embodiment of the present disclosure.
- FIG. 5 shows three monocular endoscopic images and their corresponding predicted depth maps before and after post-processing using the stereoscopic image generating algorithm of FIG. 3 according to an embodiment of the present disclosure.
- an imaging system 10 includes a monocular endoscope 20 and an image processing device 30.
- the endoscope 20 is configured to capture 2D image data, which includes still images or a video stream having a plurality of monocular endoscopic images captured over a period of time.
- the endoscope 20 may be any device structurally configured for internally imaging an anatomical region of a body (e.g., human or animal) and may include fiber optics, lenses, miniaturized (e.g., complementary metal oxide semiconductor (CMOS) sensor) imaging systems or the like.
- Suitable endoscopes 20 include, but are not limited to, any type of scope (e.g., a bronchoscope, a colonoscope, a laparoscope, etc. ) and any device similar to a scope that is equipped with an image system (e.g., an imaging cannula) .
- the endoscope 20 is coupled to the image processing device 30, which is configured to receive image data from the endoscope 20 for further processing.
- the image processing device 30 may include a processor 32, which may be operably connected to a memory 34, which may include one or more of volatile, non-volatile, magnetic, optical, or electrical media, such as read-only memory (ROM) , random access memory (RAM) , electrically-erasable programmable ROM (EEPROM) , non-volatile RAM (NVRAM) , or flash memory.
- the processor 32 is configured to perform the operations, calculations, and/or set of instructions stored in the memory 34.
- the processor 32 may be any suitable processor including, but not limited to, a hardware processor, a field programmable gate array (FPGA) , a digital signal processor (DSP) , a central processing unit (CPU) , a microprocessor, a graphic processing unit ( “GPU” ) , and combinations thereof.
- the image processing device 30 is also coupled to a display 40, which may be a stereoscopic monitor and is configured to display the stereoscopic images or stereoscopic video stream generated by and transmitted from the image processing device 30.
- the display 40 may be configured to display stereoscopic images in a side-by-side format or an interlaced format to be viewed with the aid of 3D glasses.
- the display 40 may be an autostereoscopic display (e.g., using a parallax barrier, lenticular lens, or other display technologies) configured to display stereoscopic images without 3D glasses.
- the image processing device 30 receives monocular images from the endoscope 20 as input, and generates the corresponding stereoscopic images which are displayed on the display 40.
- the input monocular image may be the left image or the right image in the generated stereoscopic images and the generated image is the counterpart image (e.g., left or right) .
- the image processing device 30 is configured to execute an image generation algorithm based on deep learning, which performs stereoscopic image generation.
- the algorithm may be embodied as a software application or instructions stored in the memory 34 and executable by the processor 32. Initially, the image processing device 30 receives an input image (e.g., left image) which may be a still image or a frame of a video stream, from the endoscope 20. In embodiments, the input image may be a right image.
- Stereoscopic images may be generated by initially calculating an estimated depth map based on the input image, then combining the estimated depth map with the input image to generate the counterpart image. Thereafter, the generated counterpart image and the input image are combined to form a stereoscopic image.
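- The per-frame flow described above can be sketched as follows (a minimal NumPy sketch, not the patented implementation: the depth network is replaced by a precomputed depth map, and the focal length, baseline, depth-to-disparity relation, shift direction, and nearest-neighbor warp are illustrative assumptions the disclosure does not fix):

```python
import numpy as np

def stereo_pair(frame, depth, prev_avg_disp=None, beta=0.7,
                focal_px=400.0, baseline_px=20.0):
    """Sketch of one pipeline iteration: depth -> initial disparity ->
    average disparity -> counterpart image -> stereoscopic pair.
    focal_px and baseline_px are illustrative values only."""
    # Initial disparity from the estimated depth map; the standard stereo
    # relation disparity = focal * baseline / depth is assumed here.
    init_disp = focal_px * baseline_px / np.maximum(depth, 1e-6)
    # Average disparity: exponentially weighted moving average with the
    # previous frame's average (formula (I) in the description).
    if prev_avg_disp is None:
        avg_disp = init_disp
    else:
        avg_disp = beta * prev_avg_disp + (1.0 - beta) * init_disp
    # Counterpart (right) view: shift each pixel horizontally by its
    # rounded average disparity (nearest-neighbor sampling for brevity).
    h, w = avg_disp.shape
    cols = np.arange(w)
    counterpart = np.empty_like(frame)
    for y in range(h):
        src = np.clip(cols + np.round(avg_disp[y]).astype(int), 0, w - 1)
        counterpart[y] = frame[y, src]
    # Stereoscopic image: input (left) frame next to the generated view.
    return np.hstack([frame, counterpart]), avg_disp
```

Feeding each frame's returned average disparity back in as `prev_avg_disp` for the next frame gives the frame-to-frame consistency the description relies on.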
- a depth map is a visualization of the distances from surface of an object to a viewpoint (e.g., an imaging device) .
- FIG. 2 shows endoscopic images 50a and 50b along with their corresponding predicted disparity maps 52a and 52b.
- the disparity maps 52a and 52b provide a visualization of the differences between left and right images (or between an image and an estimated depth map) of the same object.
- the disparity maps of FIG. 2 demonstrate that when the scene is stationary and the predicted disparity includes many features, per-frame noise in the generated stereoscopic images results in the “wave” phenomenon.
- This phenomenon occurs mostly in broad, flattened tissues, where the gradient is relatively small, such as when the scene changes slowly due to movement of the endoscope 20 or when the endoscope 20 is stationary.
- the relationship between stereoscopic scenes, i.e., between the input image and the generated image, is amplified from frame to frame due to lack of motion of the endoscope 20.
- a per-frame stereoscopic reconstruction algorithm may lead to the abrupt variation in disparity details, resulting in the “wave” phenomenon (see also FIG. 5) .
- a stereoscopic image generation algorithm of this disclosure utilizes the disparity consistency to smooth the abrupt changes in a predicted disparity, thereby eliminating the “wave” phenomenon.
- FIG. 3 shows a method for stereoscopic visualization using the imaging system 10 and FIG. 4 shows a schematic diagram of the process.
- a video stream from the endoscope 20 is received at the image processing device 30.
- the video stream may be of any suitable resolution, e.g., 4K, 1080p, 720p, etc.
- the image processing device 30 receives a plurality of frames (i.e., still monocular images) from the video stream.
- the image processing device 30 calculates an estimated depth map for the input frame 200 using a first neural network, which may be a convolutional neural network.
- the convolutional neural network may have any suitable convolutional architecture, such as a U-Net architecture, which is commonly used in medical image processing.
- a residual neural network may be used to obtain estimated depth maps.
- any suitable depth estimation algorithm may be used.
- the image processing device 30 generates an initial disparity map 202 using a plurality of depth maps obtained from each of the frames 200.
- the initial disparity map 202 may be generated by a second neural network, which may be a convolutional neural network such as ShuffleNet.
- the second neural network obtains the initial disparity map 202 based on the plurality of depth maps obtained at step 102.
- the image processing device calculates an average disparity map 204 based on the initial disparity map 202 and average disparity maps 204 of adjacent frames 200.
- the average disparity map 204 may be calculated using an exponentially weighted moving average, which is represented by formula (I): V_t = β · V_{t−1} + (1 − β) · d_t (I)
- V_t represents the average of the disparities over the previous frames (0 to t−1) and the current frame.
- d_t is the disparity of the current frame.
- β is a weighted coefficient that may be adjusted during configuration of the image generation algorithm. The larger β is, the more significant the disparity smoothing effect becomes.
- the weighted coefficient β may be from about 0.3 to about 0.9, or in further embodiments, from about 0.5 to about 0.8.
- this processing step may be performed on a plurality of depth maps of the adjacent frames.
- the number of adjacent frames may be from about 2 to about 10 in either direction, or in further embodiments may be about 3.
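- The smoothing step of formula (I) can be sketched in NumPy as follows (a minimal sketch: the flat disparity values and the one-frame spike are fabricated purely to illustrate how the moving average attenuates an abrupt, wave-like change):

```python
import numpy as np

def ewma_disparity(disparities, beta=0.7):
    """Smooth a sequence of per-frame disparity maps with formula (I):
    V_t = beta * V_{t-1} + (1 - beta) * d_t.  A larger beta weights the
    history more heavily and smooths more aggressively."""
    avg = disparities[0].astype(float)
    smoothed = [avg]
    for d in disparities[1:]:
        avg = beta * avg + (1.0 - beta) * d
        smoothed.append(avg)
    return smoothed

# A flat 5.0-pixel disparity stream with a one-frame spike at frame 2,
# standing in for the abrupt per-frame variation behind the "wave" artifact:
frames = [np.full((2, 2), 5.0) for _ in range(5)]
frames[2] = np.full((2, 2), 9.0)

smoothed = ewma_disparity(frames, beta=0.7)
# With beta = 0.7, only (1 - beta) = 0.3 of the spike passes through at
# frame 2, and the residual decays geometrically in later frames.
```

With β at the upper end of the stated 0.3 to 0.9 range, the spike is suppressed further, at the cost of the disparity reacting more slowly to genuine scene motion.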
- FIG. 5 shows three adjacent frames 200 and their corresponding initial disparity maps 202 and average disparity maps 204.
- the “wave” phenomenon as shown in regions 203 has been removed after applying the exponentially weighted moving average technique of step 106.
- the minor details in the initial disparity maps 202, which include the uncertainty that produces the “wave” phenomenon, are removed.
- obtaining the average disparity map 204 for each frame 200 using the exponentially weighted moving average eliminates the spurious details (i.e., artifacts) in the initial disparity map 202 that cause the “wave” phenomenon.
- the average disparity map 204 is then used along with the corresponding frame 200 to generate a corresponding counterpart frame 206 (i.e., right image) .
- the image processing device 30 samples corresponding pixels from the frame 200 and uses interpolation to generate the counterpart frame 206 based on the average disparity map 204.
- Each pixel of the counterpart frame 206 is generated based on a corresponding pixel in the average disparity map 204 and colorized using the color data from the input frame 200.
- Interpolation may use any suitable technique, including, but not limited to, an area interpolation, a nearest-neighbor interpolation, a bilinear interpolation, and/or a bicubic interpolation.
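- A bilinear variant of this sampling step might look as follows (a hedged sketch: the horizontal-only shift and its sign convention are assumptions for illustration, and the frame is assumed to be an H x W x C color image):

```python
import numpy as np

def warp_bilinear(frame, avg_disp):
    """Generate a counterpart frame by sampling the input frame at
    horizontally disparity-shifted positions, blending the two nearest
    source columns with bilinear weights."""
    h, w = avg_disp.shape
    cols = np.arange(w, dtype=float)
    out = np.empty_like(frame, dtype=float)
    for y in range(h):
        x = np.clip(cols + avg_disp[y], 0.0, w - 1.0)  # source positions
        x0 = np.floor(x).astype(int)                   # left neighbor
        x1 = np.minimum(x0 + 1, w - 1)                 # right neighbor
        frac = (x - x0)[:, None]                       # per-pixel blend weight
        out[y] = (1.0 - frac) * frame[y, x0] + frac * frame[y, x1]
    return out.astype(frame.dtype)
```

Swapping the blend for a plain `np.round` index reproduces nearest-neighbor interpolation; the other listed modes (bicubic, area) follow the same sampling pattern with different weighting kernels.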
- the input frame 200 (i.e., left original image) and the right generated image are combined as a stereoscopic image and displayed on the display 40.
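- As a small illustration of combining the two views for display (side-by-side and row-interlaced packing are common stereoscopic display conventions; the disclosure does not prescribe a specific packing):

```python
import numpy as np

def side_by_side(left, right):
    # Pack the left and right views into one frame for a side-by-side
    # stereoscopic display.
    return np.hstack([left, right])

def row_interlaced(left, right):
    # Alternate rows from the two views for a line-interlaced
    # stereoscopic display.
    out = left.copy()
    out[1::2] = right[1::2]
    return out
```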
- Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer) .
Abstract
An imaging system (10) includes a monocular endoscope (20) configured to capture an input image. The system (10) also includes an image processing device (30) having: a processor (32); and a memory (34) having instructions stored thereon which, when executed by the processor (32), cause the image processing device (30) to: calculate an estimated depth map for the input image; calculate an initial disparity map based on the estimated depth map; calculate an average disparity map (204) for the input image based on a plurality of estimated disparity maps and the initial disparity map (202); generate a counterpart image based on the average disparity map (204); and generate a stereoscopic image based on each input image and the counterpart image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2022/085012 WO2023184527A1 (fr) | 2022-04-02 | 2022-04-02 | System and method for unsupervised stereoscopic reconstruction with disparity consistency |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2022/085012 WO2023184527A1 (fr) | 2022-04-02 | 2022-04-02 | System and method for unsupervised stereoscopic reconstruction with disparity consistency |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023184527A1 true WO2023184527A1 (fr) | 2023-10-05 |
Family
ID=88198827
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/085012 WO2023184527A1 (fr) | 2022-04-02 | 2022-04-02 | System and method for unsupervised stereoscopic reconstruction with disparity consistency |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023184527A1 (fr) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101496413A (zh) * | 2006-08-01 | 2009-07-29 | Qualcomm Incorporated | Real-time capture and generation of stereo images and videos with a monoscopic low-power mobile device |
CN101933335A (zh) * | 2008-01-29 | 2010-12-29 | Thomson Licensing | Method and system for converting two-dimensional image data into stereoscopic image data |
CN102098527A (zh) * | 2011-01-28 | 2011-06-15 | Tsinghua University | Method and device for 2D-to-stereoscopic conversion based on motion analysis |
CN106504190A (zh) * | 2016-12-29 | 2017-03-15 | Zhejiang Gongshang University | Stereoscopic video generation method based on a 3D convolutional neural network |
CN108765479A (zh) * | 2018-04-04 | 2018-11-06 | Shanghai University of Engineering Science | Method for optimizing monocular-view depth estimation in video sequences using deep learning |
CN110798676A (zh) * | 2019-11-29 | 2020-02-14 | Suzhou Xinguangwei Medical Technology Co., Ltd. | Method and device for forming 3D vision using dynamic images from an endoscope lens |
US20210352261A1 (en) * | 2020-05-11 | 2021-11-11 | Niantic, Inc. | Generating stereo image data from monocular images |
- 2022-04-02: WO PCT/CN2022/085012 patent application WO2023184527A1 filed (status unknown)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10966592B2 (en) | 3D endoscope apparatus and 3D video processing apparatus | |
RU2556593C2 (ru) | Image-integration-based registration and navigation for endoscopic surgery | |
WO2017145788A1 (fr) | Image processing device, image processing method, program, and surgical system | |
US11030745B2 (en) | Image processing apparatus for endoscope and endoscope system | |
US20140293007A1 (en) | Method and image acquisition system for rendering stereoscopic images from monoscopic images | |
Collins et al. | Towards live monocular 3D laparoscopy using shading and specularity information | |
JP5893808B2 (ja) | Stereoscopic endoscope image processing apparatus | |
US10993603B2 (en) | Image processing device, image processing method, and endoscope system | |
KR20210146283A (ko) | Generation of synthetic three-dimensional imaging from partial depth maps | |
US9408528B2 (en) | Stereoscopic endoscope system | |
US11463676B2 (en) | Stereoscopic visualization system and method for endoscope using shape-from-shading algorithm | |
WO2023184527A1 (fr) | System and method for unsupervised stereoscopic reconstruction with disparity consistency | |
CN117204791A (zh) | Endoscopic instrument guidance method and system | |
Sdiri et al. | An adaptive contrast enhancement method for stereo endoscopic images combining binocular just noticeable difference model and depth information | |
EP3130273A1 (fr) | Stereoscopic visualization system and method for endoscope using shape-from-shading algorithm | |
WO2023184526A1 (fr) | System and method for real-time stereoscopic visualization based on a monocular camera | |
CN115623163A (zh) | System and method for acquisition and fused display of two-dimensional and three-dimensional images | |
CN112866670B (zh) | Surgical 3D video stabilization and synthesis system and method based on binocular spatiotemporal adaptation | |
Lo et al. | Real-time intra-operative 3D tissue deformation recovery | |
WO2023184525A1 (fr) | Hybrid image magnification system and method using deep learning | |
TWI538651B (zh) | Stereo visualization system and method of endoscopy using chromaticity forming method | |
JP6600442B2 (ja) | Monocular endoscope stereoscopic visualization system and method using shape-from-shading | |
WO2018128028A1 (fr) | Endoscopic device and image generation method for endoscopic device | |
US20230081476A1 (en) | Method of multiple image reconstruction and registration | |
Chen et al. | Hybrid NeRF-Stereo Vision: Pioneering Depth Estimation and 3D Reconstruction in Endoscopy |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22934356 Country of ref document: EP Kind code of ref document: A1 |