CN114005526A - Method for switching 2D (two-dimensional) and 3D (three-dimensional) images based on endoscopic surgery scene and related equipment - Google Patents


Info

Publication number
CN114005526A
CN114005526A (application CN202111274843.7A)
Authority
CN
China
Prior art keywords: image, processed, images, degree, value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111274843.7A
Other languages
Chinese (zh)
Inventor
Cao Guokun (曹国坤)
Dong Jie (董杰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Medical Equipment Co Ltd
Original Assignee
Qingdao Hisense Medical Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Medical Equipment Co Ltd filed Critical Qingdao Hisense Medical Equipment Co Ltd
Priority to CN202111274843.7A priority Critical patent/CN114005526A/en
Publication of CN114005526A publication Critical patent/CN114005526A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60: ICT specially adapted for the management or operation of medical equipment or devices
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10024: Color image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10068: Endoscopic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • General Business, Economics & Management (AREA)
  • Business, Economics & Management (AREA)
  • Public Health (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Endoscopes (AREA)

Abstract

The application discloses a method for switching between 2D (two-dimensional) and 3D (three-dimensional) images based on an endoscopic surgery scene, and related equipment, which address the problems that manually switching between 3D and 2D image display throughout a surgical procedure is cumbersome, reduces surgical efficiency, and prolongs the operation time. The method comprises the following steps: acquiring multiple frames of images collected by an endoscope at specified time intervals; determining whether the endoscope is in a moving state based on the degree of change of the image content across the frames; if the endoscope is in a moving state, determining to output a 2D image; and if it is in a static state, determining to output a 3D image. With this method, 2D and 3D images are switched and displayed adaptively according to the scene, without requiring the doctor to switch the display manually, thereby avoiding repeated manual operation of the endoscope display during surgery, shortening the operation time, improving surgical efficiency, and improving the user experience.

Description

Method for switching 2D (two-dimensional) and 3D (three-dimensional) images based on endoscopic surgery scene and related equipment
Technical Field
The application relates to the technical field of medical treatment, and in particular to a method for switching between 2D (two-dimensional) and 3D (three-dimensional) images based on an endoscopic surgery scene, and to related equipment.
Background
A traditional 2D endoscope can only display two-dimensional images, but a two-dimensional image seen by the eyes deviates from the actual three-dimensional relationships, so hand-eye mismatch is unavoidable during surgery. A 3D endoscope restores a three-dimensional surgical field with true depth, provides magnification, overcomes the visual discrepancy and inconvenience of the traditional 2D endoscope, displays human tissue more realistically, and improves the speed and accuracy of surgery, which is convenient for doctors. However, in actual use, displaying 3D images for a long time may cause nausea, vomiting, dizziness, visual fatigue, and similar problems, which affect the surgical outcome and can lead to a poor experience.
In the prior art, a doctor must manually operate the endoscope display to switch between 2D and 3D image display. Although each switch is simple, the doctor has to operate the display repeatedly during surgery; this is cumbersome, affects the operation time and efficiency, and makes for a poor experience.
Disclosure of Invention
The application aims to provide a method for switching between 2D and 3D images based on an endoscopic surgery scene, and related equipment, to solve the problems that manually switching between 3D and 2D image display throughout a surgical procedure is cumbersome, reduces surgical efficiency, and prolongs the operation time.
In a first aspect, the present application provides a method of switching between 2D and 3D images based on an endoscopic surgical scene, the method comprising:
acquiring multiple frames of images collected by an endoscope at specified time intervals;
determining whether the endoscope is in a moving state based on the degree of change of the image content of the multiple frames of images;
if the endoscope is in a moving state, determining to output a 2D image;
and if the endoscope is in a static state, determining to output a 3D image.
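As a rough illustration of the claimed flow, the following Python sketch implements the moving/static decision over sampled frames. The mean-absolute-difference metric and the default threshold are assumptions for illustration only; the application itself describes RGB-component deviations and template matching as the change metric.

```python
def degree_of_change(prev, curr):
    # Illustrative change metric: mean absolute pixel difference between two
    # frames given as flat lists of intensity values (an assumption; the
    # application describes RGB-sum/mean deviations and template matching).
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(prev)

def choose_display_mode(frames, degree_threshold=5.0):
    # Check adjacent image pairs in acquisition order: if any pair changes
    # less than the threshold, the endoscope is treated as static -> 3D.
    for prev, curr in zip(frames, frames[1:]):
        if degree_of_change(prev, curr) < degree_threshold:
            return "3D"
    # Every pair changed at or above the threshold -> moving -> 2D.
    return "2D"
```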
In a possible implementation manner, the determining whether the endoscope is in a moving state based on the degree of change of the image content of the multiple frames of images specifically includes:
acquiring at least one group of image pairs from the multi-frame images, wherein each group of image pairs consists of two adjacent frames of images in the multi-frame images;
sequentially determining the change degree of the image content of each image pair according to the sequence of the acquisition time, and if the change degree of the image content of any image pair is smaller than a degree threshold value, determining that the endoscope is in a static state;
if the change degree of the image content of any image pair is larger than or equal to the degree threshold value, determining the change degree of the image content of the next image pair; and if the degree of change of the image content of each image pair is greater than or equal to the degree threshold value, determining that the endoscope is in a moving state.
In a possible implementation manner, the determining whether the endoscope is in a moving state based on the degree of change of the image content of the multiple frames of images specifically includes:
acquiring at least one group of image pairs from the multi-frame images, wherein each group of image pairs consists of two adjacent frames of images in the multi-frame images;
determining the change degree of the image content of each image pair in parallel;
if the change degree of the image content of any image pair is smaller than a degree threshold value, determining that the endoscope is in a static state;
and if the change degree of the image content of each image pair is larger than or equal to the degree threshold value, determining that the endoscope is in a moving state.
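A minimal sketch of the parallel variant above, assuming a `degree_of_change` callable supplied by the caller; the thread-pool mapping and the string return values are illustrative assumptions, not the application's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def endoscope_state_parallel(image_pairs, degree_of_change, degree_threshold):
    # Compute the change degree of every image pair in parallel.
    with ThreadPoolExecutor() as pool:
        degrees = list(pool.map(lambda pair: degree_of_change(*pair), image_pairs))
    # Any pair below the threshold -> static; all at/above it -> moving.
    return "static" if any(d < degree_threshold for d in degrees) else "moving"
```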
In a possible implementation manner, for any image pair, determining a degree of change in image content of the image pair specifically includes:
selecting at least one sampling area with the same position in the image pair;
for each of the at least one positionally identical sampling region, the following is performed:
determining, for the sampling region of each frame image in the image pair, the sum and the average value of each of the three components to be processed: the R value, the G value, and the B value;
calculating, based on the sums and average values of the components to be processed, the deviation of the sum of each component to be processed and the deviation of the average value of each component to be processed for the sampling region in the image pair;
if, for any sampling region in the image pair, the deviations of the sums and the deviations of the average values of all the components to be processed are smaller than a preset deviation threshold, determining that the degree of change of the image content of the image pair is smaller than the degree threshold;
and if, for every sampling region in the image pair, at least one of the deviations of the sums and the deviations of the average values of the components to be processed is greater than or equal to the preset deviation threshold, determining that the degree of change of the image content of the image pair is greater than or equal to the degree threshold.
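The region-level decision just described can be sketched as follows. `region_deviations` is a hypothetical helper (not named in the application) that returns, per R/G/B component, a (sum-deviation, mean-deviation) tuple; the absolute-value comparison is an assumption.

```python
def pair_change_below_threshold(region_pairs, region_deviations, deviation_threshold):
    # region_pairs: (previous-frame region, next-frame region) tuples taken at
    # the same positions in the two frames of an image pair.
    # region_deviations(prev, next) is a hypothetical helper returning, per
    # R/G/B component, a (sum-deviation, mean-deviation) tuple.
    for prev_region, next_region in region_pairs:
        devs = region_deviations(prev_region, next_region)
        if all(abs(ds) < deviation_threshold and abs(dm) < deviation_threshold
               for ds, dm in devs):
            # A single region whose deviations are all below the deviation
            # threshold means the pair's change degree is below the threshold.
            return True
    # Every region had at least one deviation at or above the threshold.
    return False
```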
In a possible implementation, the calculating the deviation of the sum of each of the three components to be processed of the R value, the G value, and the B value in the sampling region in the image pair and the deviation of the average value of each component to be processed specifically includes:
for any one of three components to be processed of an R value, a G value and a B value in the sampling region in the image pair, executing the following operations:
subtracting the sum of the components to be processed in the sampling region in the next frame image from the sum of the components to be processed in the sampling region in the previous frame image in the image pair to obtain a difference value of the sums of the components to be processed;
dividing the difference value of the sum of the components to be processed by the sum of the components to be processed in the sampling area in the next frame of image to obtain the deviation of the sum of the components to be processed in the sampling area in the image pair;
subtracting the average value of the components to be processed in the sampling area in the next frame image from the average value of the components to be processed in the sampling area in the previous frame image in the image pair to obtain the difference value of the average values of the components to be processed;
and dividing the difference value of the average value of the components to be processed by the average value of the components to be processed in the sampling area in the next frame of image to obtain the deviation of the average value of the components to be processed in the sampling area in the image pair.
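The deviation computation above can be written directly; per the stated steps, the next-frame value is the denominator (values are taken as given, with no zero-denominator guard in this sketch).

```python
def component_deviations(sum_prev, sum_next, mean_prev, mean_next):
    # Deviation of the sum: (previous-frame sum - next-frame sum) / next-frame sum.
    dev_sum = (sum_prev - sum_next) / sum_next
    # Deviation of the average value, computed the same way from the means.
    dev_mean = (mean_prev - mean_next) / mean_next
    return dev_sum, dev_mean
```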
In a possible implementation manner, the multiple frames of images are specifically:
a plurality of consecutive frames of images; or
multiple frames of images sampled at an interval of n frames, where n is a positive integer.
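The two sampling options can be expressed as a small helper; the parameterization, with n = 0 meaning consecutive frames, is an assumption for illustration.

```python
def sample_frames(frames, n=0):
    # n == 0: take every consecutive frame.
    # n >= 1: keep one frame, skip n frames, keep the next, and so on.
    if n == 0:
        return list(frames)
    return frames[::n + 1]
```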
In a possible implementation manner, for any image pair, determining a degree of change in image content of the image pair specifically includes:
selecting at least one area of biological tissue from a previous frame image in the image pair as at least one template area, and recording a first position coordinate of each template area in the previous frame image;
detecting each template area in a next frame image in the image pair, and recording a second position coordinate of each template area in the next frame image;
calculating a displacement of each of the template regions based on the first and second position coordinates;
if the displacement of any template area is smaller than a preset displacement threshold, determining that the change degree of the image content of the image pair is smaller than a degree threshold;
and if the displacement of each template area is greater than or equal to a preset displacement threshold, determining that the change degree of the image content of the image pair is greater than or equal to a degree threshold.
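Given template positions found in the two frames, the displacement rule above can be sketched as follows. The coordinate dictionaries and the Euclidean distance are assumed representations; a template not detected in the next frame is treated as having moved at least the threshold.

```python
import math

def displacement_below_threshold(first_coords, second_coords, displacement_threshold):
    # first_coords / second_coords: {template_id: (x, y)} positions of each
    # template region in the previous / next frame (an assumed representation).
    for template_id, (x1, y1) in first_coords.items():
        if template_id not in second_coords:
            # Undetected template: treated as displacement >= threshold.
            continue
        x2, y2 = second_coords[template_id]
        if math.hypot(x2 - x1, y2 - y1) < displacement_threshold:
            # One template that moved less than the threshold is enough to
            # treat the pair's change degree as below the degree threshold.
            return True
    # Every template moved at least the threshold distance (or was not found).
    return False
```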
In one possible embodiment, the method further comprises:
and if the template region cannot be detected in the next frame of image in the image pair, determining that the displacement of the template region is greater than or equal to a preset displacement threshold value.
In one possible embodiment, after determining that the endoscope is in the moving state and before outputting the 2D image based on the degree of change in the image content of the plurality of frames of images, the method further includes:
if switching from the 3D image to the 2D image for output, acquiring the last frame image before switching, and continuously displaying the last frame image before switching as the current image; and
switching from the 3D image to the 2D image within a duration threshold;
after the 2D image is switched to, ending displaying the current image, and outputting the 2D image;
after determining that the endoscope is in a still state and before outputting a 3D image based on the degree of change in the image content of the plurality of frames of images, the method further comprises:
if switching from the 2D image to the 3D image for output, acquiring the last frame of image before switching, and continuously displaying the last frame of image before switching as the current image; and
switching from the 2D image to the 3D image within a duration threshold;
and after the 3D image is switched, ending displaying the current image, and outputting the 3D image.
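The transition described above (hold the last pre-switch frame on screen while the mode change takes place, then resume output in the new mode) can be sketched as a small state holder; the class and method names are illustrative assumptions.

```python
class DisplaySwitcher:
    def __init__(self, mode="2D"):
        self.mode = mode            # current output mode: "2D" or "3D"
        self.frozen_frame = None    # last pre-switch frame held on screen
        self.target_mode = None

    def begin_switch(self, target_mode, last_frame):
        # Keep displaying the last frame captured before the switch as the
        # current image while the mode change takes place.
        self.frozen_frame = last_frame
        self.target_mode = target_mode

    def complete_switch(self):
        # Called once the switch finishes (within the duration threshold):
        # stop displaying the frozen frame and output in the new mode.
        self.mode = self.target_mode
        self.frozen_frame = None
        self.target_mode = None
        return self.mode
```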
In a second aspect, the present application provides an endoscopic device comprising an endoscope, a processor, a display, and a memory:
the endoscope is used for acquiring images;
the memory for storing a computer program executable by the processor;
the display is used for displaying the image;
the processor is connected with the memory and configured to execute the instructions to implement the method of switching 2D, 3D images based on an endoscopic surgical scene according to any of the first aspect above.
In a third aspect, the present application provides a computer readable storage medium having instructions which, when executed by an endoscopic device, enable the endoscopic device to perform a method of switching 2D, 3D images based on an endoscopic surgical scene as described in any one of the first aspects above.
In a fourth aspect, the present application provides a computer program product comprising a computer program:
the computer program when executed by a processor implements a method of switching 2D, 3D images based on an endoscopic surgical scene as described in any one of the first aspects above.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
according to the embodiment of the application, multi-frame images collected by an endoscope are acquired at intervals of specified duration; determining whether the endoscope is in a moving state based on the degree of change of the image content of the plurality of frames of images; if the mobile terminal is in the moving state, determining to output a 2D image; and if the image is in the static state, determining to output the 3D image. Therefore, the 2D image and the 3D image can be switched and displayed in a self-adaptive mode according to scenes, the 3D image and the 2D image are not required to be switched and displayed manually by a doctor, the trouble that the endoscope display is operated manually repeatedly in the operation process is avoided, the operation time is shortened, the operation efficiency is improved, and the use experience of a user is improved.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is an application scene diagram of a method for switching 2D and 3D images based on an endoscopic surgical scene according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an endoscopic apparatus provided in an embodiment of the present application;
fig. 3A is a schematic flowchart of a method for switching 2D and 3D images based on an endoscopic surgical scene according to an embodiment of the present application;
fig. 3B is a schematic diagram of a plurality of frames of images collected by an endoscope according to an embodiment of the present application;
FIG. 4 is a schematic flow chart illustrating a process for determining a degree of change in image content of an arbitrary image pair according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of sampling regions with the same position in an image pair according to an embodiment of the present application;
FIG. 6 is a schematic flow chart illustrating another method for determining a degree of change in image content of an arbitrary image pair according to an embodiment of the present disclosure;
fig. 7 is a schematic process diagram of switching a 2D image into a 3D image according to an embodiment of the present application;
fig. 8 is a schematic process diagram of switching a 3D image to a 2D image according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. The embodiments described are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Also, in the description of the embodiments of the present application, unless otherwise specified, "/" indicates "or"; for example, A/B may indicate A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, "a plurality" means two or more in the description of the embodiments of the present application.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments of the application, unless stated otherwise, "plurality" means two or more.
Hereinafter, some terms in the embodiments of the present application are explained to facilitate understanding by those skilled in the art.
Endoscope: the device integrates the traditional optical, human engineering, precision machinery, modern electronics, mathematics and software into a whole, and is provided with an image sensor, an optical lens, a light source for illumination, a mechanical device and the like. The endoscope can enter the human body through a natural duct of the human body or enter the human body through a small incision made by operation. When in use, the endoscope is introduced into a pre-examined organ, the change of the relevant part can be directly observed, and the lesion which can not be displayed by X-ray can be seen by the endoscope, so the endoscope is very useful for doctors. For example, with the aid of an endoscopist, an ulcer or tumor in the stomach can be observed, and an optimal treatment plan can be developed accordingly. The quality of the image displayed by the endoscope directly affects the using effect of the endoscope and also marks the development level of the endoscope technology. Currently, 2D endoscopes are widely used in the market, and 3D endoscopes are relatively few.
A traditional 2D endoscope can only display two-dimensional images, but a two-dimensional image seen by the eyes deviates from the actual three-dimensional relationships, so hand-eye mismatch is unavoidable during surgery. Compared with the traditional 2D endoscope, a 3D endoscope offers faster positioning, faster image grasping, and a stronger sense of depth. The 3D endoscope restores a three-dimensional surgical field with true depth, provides magnification, overcomes the visual discrepancy and inconvenience of the traditional 2D endoscope, displays human tissue more realistically, and improves the speed and accuracy of surgery, which is convenient for doctors. However, in actual use, displaying 3D images for a long time may cause nausea, vomiting, dizziness, visual fatigue, and similar problems, which affect the surgical outcome and may lead to a poor experience.
In the prior art, a doctor must manually operate the endoscope display to switch between 2D and 3D image display. Although each switch is simple, the doctor has to operate the display repeatedly during surgery; this is cumbersome, affects the operation time and efficiency, and makes for a poor experience.
In view of the above, the present application provides a method and related device for switching between 2D and 3D images based on an endoscopic surgery scene, to solve the problems that manually switching between 3D and 2D image display throughout surgery is cumbersome, reduces surgical efficiency, and prolongs the operation time.
The inventive concept of the present application can be summarized as follows: acquire multiple frames of images collected by an endoscope at specified time intervals; determine whether the endoscope is in a moving state based on the degree of change of the image content of the frames; if it is in a moving state, output a 2D image; and if it is in a static state, output a 3D image. In this way, the state of the endoscope is judged from the degree of change of the image content of the frames it collects: if the endoscope is moving, the display is switched from the 3D image to the 2D image; once the endoscope comes to rest, the display is switched from the 2D image back to the 3D image. Thus 2D and 3D images are switched and displayed adaptively, the doctor is spared repeated manual switching during surgery, the operation time is shortened, and surgical efficiency is improved.
After introducing the main inventive concept of the embodiments of the present application, application scenarios to which the technical solutions can be applied are briefly described below. It should be noted that the application scenarios described below are merely illustrative of the embodiments of the present application and are not limiting. In specific implementations, the technical solutions provided by the embodiments of the present application can be applied flexibly according to actual needs.
Reference is made to fig. 1, which is an application scene diagram of a method for switching 2D and 3D images based on an endoscopic surgical scene according to an embodiment of the present application. In the application scenario diagram, a doctor 101, an endoscopic apparatus 102, and a patient 103 are included. Wherein:
the endoscope device 102 is used for entering and exiting the body of the patient 103, scanning and collecting images of various tissue parts in the body, determining the state of the endoscope device 102 according to the image change degree of the collected multi-frame images, processing the collected images according to the determined state, displaying the processed images and providing the processed images for the doctor 101 to view.
The doctor 101 controls the endoscope device 102 to enter and exit the patient 103 and to move inside the patient 103, searches for lesions among the organs of the patient 103 by viewing the images displayed by the endoscope device 102, and, after locating a lesion, performs operations such as surgical resection and reconstruction.
The patient 103 consents to the doctor 101 controlling the endoscope device 102 to enter and exit the patient's body, and to the doctor 101 performing the operation.
Of course, the method provided in the embodiment of the present application is not limited to the application scenario shown in fig. 1, and may also be used in other possible application scenarios, and the embodiment of the present application is not limited. The functions that can be implemented by each device in the application scenario shown in fig. 1 will be described in the following method embodiments, and will not be described in detail herein.
Referring to fig. 2, a schematic structural diagram of an endoscope apparatus according to an embodiment of the present application is provided. Its parts can be implemented by modules or functional components of the endoscope apparatus 102 shown in fig. 1. Only the main components are described below; other components, such as a memory, a controller, and control circuitry, are not described here.
As shown in fig. 2, the endoscopic device 102 includes an endoscope 210 and an endoscope display 220. The endoscope 210 has an image sensor, an optical lens, a light source, mechanical devices, etc. (not shown in the figure), can enter the body through a natural orifice, and is mainly used for collecting multiple frames of images of human tissue at specified intervals. The endoscope 210 can be used to see lesions that X-rays cannot show, which is very useful for a doctor.
The endoscope display 220 is composed of an image processing module 221 and a module screen 222. The endoscope display 220 is mainly used as a dedicated display for receiving and displaying images collected by the endoscope 210.
The image processing module 221 includes an image motion determination module 2211 and a display control module 2212, and is mainly used for processing the images acquired by the endoscope 210. The image processing module 221 may be implemented as a processor core or as an FPGA (Field Programmable Gate Array) chip, which is not limited in this application.
The image motion determination module 2211 mainly determines the degree of change of the images captured by the endoscope 210, judges whether the endoscope is relatively stationary or moving quickly, and sends a display control instruction to the display control module 2212. The display control module 2212 mainly controls whether the image is displayed in 3D or 2D form.
The module screen 222 may include a plurality of backlight partitions, each of which can be lit to illuminate the endoscope display 220, and a display portion driven by a TCON (timing controller) (not shown).
In order to facilitate understanding of the method for switching 2D and 3D images based on an endoscopic surgical scene provided in the embodiments of the present application, the following description will be further described with reference to the accompanying drawings.
Fig. 3A is a schematic flowchart of a method for switching 2D and 3D images based on an endoscopic surgical scene according to an embodiment of the present application. As shown in fig. 3A, the method includes the steps of:
in step 301, a plurality of frames of images captured by an endoscope are acquired at specified intervals.
In a possible implementation manner, the multiple frames of images are specifically: a plurality of consecutive frames of images; or multiple frames of images sampled at an interval of n frames, where n is a positive integer.
In a possible implementation manner, acquiring consecutive frames makes changes in the image content observable more sensitively and more finely than sampling at an interval of n frames. Whether to acquire consecutive frames or to sample one frame every n frames can therefore be chosen according to the actual use conditions.
n may also be set according to the actual situation, which is not limited in this application.
In a possible embodiment, to determine whether the output image should be a 2D image or a 3D image, the state of the endoscope first needs to be determined. If the endoscope is in a moving state, the image contents of the multiple frames of images it collects differ considerably from frame to frame; if the endoscope is in a static state, the image contents change little or almost not at all. The state of the endoscope can therefore be determined from the degree of change of the image content of the multiple frames of images collected by the endoscope.
In step 302, whether the endoscope is in a moving state is determined based on the degree of change of the image content of the multiple frames of images. In step 303, if the endoscope is in a moving state, it is determined to output a 2D image; in step 304, if the endoscope is in a static state, it is determined to output a 3D image.
In a possible embodiment, when only two frames of images are used to judge whether the endoscope moves, a temporary misjudgment may occur if the two adjacent frames change greatly while the picture afterwards is stable and changes little, so the accuracy of the judgment result needs to be improved. To compensate for this deficiency, the embodiment of the present application compares the degree of change of at least one set of image pairs, and finally determines the state of the endoscope according to the comparison results of the at least one set of image pairs. This can be implemented as follows:
acquiring at least one group of image pairs from the multi-frame images, wherein each group of image pairs consists of two adjacent frames of images in the multi-frame images;
sequentially determining the change degree of the image content of each image pair according to the sequence of the acquisition time, and if the change degree of the image content of any image pair is smaller than a degree threshold value, determining that the endoscope is in a static state;
if the change degree of the image content of any image pair is larger than or equal to the degree threshold value, determining the change degree of the image content of the next image pair; and if the degree of change of the image content of each image pair is greater than or equal to the degree threshold value, determining that the endoscope is in a moving state.
As shown in fig. 3B, a schematic diagram of multiple frames of images collected by an endoscope is provided in the embodiment of the present application. For example, in fig. 3B, 3 frames of images acquired by the endoscope are taken in time-axis order: one frame at time T-1, one frame at time T, and one frame at time T+1. In acquisition-time order, the degree of change of the image content of the image pair consisting of the frame acquired at time T-1 and the frame acquired at time T is determined first; if it is smaller than the degree threshold, the endoscope is determined to be in a static state. If it is greater than or equal to the degree threshold, the degree of change of the image pair consisting of the frame acquired at time T and the frame acquired at time T+1 is determined next: if that degree of change is also greater than or equal to the degree threshold, the endoscope is finally determined to be in a moving state; if it is smaller than the degree threshold, the endoscope is determined to be in a static state.
Therefore, the endoscope state can be determined by sequentially determining the change degree of the image content of each image pair according to the sequence of the acquisition time and sequentially comparing the results of each image pair.
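The sequential decision just described can be sketched in a few lines. This is an illustrative Python sketch, not part of the original disclosure; the function name and the pluggable `change_degree` callback are assumptions, standing in for either of the two concrete change measures described later.

```python
def endoscope_is_moving(frames, change_degree, degree_threshold):
    """Walk adjacent image pairs in acquisition-time order; declare the
    endoscope static at the first pair whose degree of change is below
    the threshold, and moving only if every pair reaches it."""
    for prev_frame, next_frame in zip(frames, frames[1:]):
        if change_degree(prev_frame, next_frame) < degree_threshold:
            return False  # static state: stop early, later pairs are skipped
    return True  # every pair changed by at least the degree threshold
```

The early `return False` captures the advantage noted below: as soon as one pair falls under the threshold, no further pairs need to be evaluated.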
With this method, the endoscope can be determined to be in a static state as soon as the degree of change of the image content of any image pair is smaller than the degree threshold, without determining the degree of change of subsequent image pairs. However, if the degree of change of the image content of an image pair is greater than or equal to the degree threshold, the next image pair must be evaluated, and comparisons must be performed one after another, which is inefficient.
Thus, in one possible embodiment, the endoscope state may also be determined by simultaneously determining the degree of change in image content of each image pair, as permitted by the processing capabilities of the endoscope processor. Acquiring the degree of change of the image content based on the multi-frame image, and determining whether the endoscope is in a moving state, specifically comprising:
acquiring at least one group of image pairs from the multi-frame images, wherein each group of image pairs consists of two adjacent frames of images in the multi-frame images;
determining the change degree of the image content of each image pair in parallel;
if the change degree of the image content of any image pair is smaller than a degree threshold value, determining that the endoscope is in a static state;
and if the change degree of the image content of each image pair is larger than or equal to the degree threshold value, determining that the endoscope is in a moving state.
For example, the degree of change in the image content of the image pair composed of the one frame image acquired at time T-1 and the one frame image acquired at time T shown in fig. 3B and the degree of change in the image content of the image pair composed of the one frame image acquired at time T and the one frame image acquired at time T +1 are determined simultaneously, and then the results of the degrees of change in the image contents of the two sets of image pairs are compared with the degree threshold value simultaneously. If the change degree of the image content of one image pair in the two image pairs is smaller than a degree threshold value, determining that the endoscope is in a static state; and if the change degrees of the image contents of the two groups of image pairs are both larger than or equal to the degree threshold value, determining that the endoscope is in a moving state.
Therefore, the change degree of the image content of each image pair can be determined in parallel, and the change degrees of the image content of all the image pairs can be obtained at the same time, so that the state of the endoscope can be determined only by comparing all the results once, multiple times of comparison is not required, the time is saved, and the efficiency is improved.
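The parallel variant can be sketched with a thread pool. This is an illustrative Python sketch under assumed names; the embodiment only requires that all pair degrees be computed concurrently and then compared against the threshold in one pass.

```python
from concurrent.futures import ThreadPoolExecutor

def endoscope_is_moving_parallel(frames, change_degree, degree_threshold):
    """Compute the degree of change of every adjacent image pair at the
    same time, then compare all results against the threshold at once."""
    pairs = list(zip(frames, frames[1:]))
    with ThreadPoolExecutor() as pool:
        degrees = list(pool.map(lambda pair: change_degree(*pair), pairs))
    # moving only if every pair changed by at least the degree threshold;
    # a single pair below the threshold means the endoscope is static
    return all(d >= degree_threshold for d in degrees)
```

Compared with the sequential sketch, this trades the possibility of early exit for a single batched comparison, which is the time saving described above.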
In a possible implementation manner, whether the degree of change of the image content of each image pair is determined sequentially in acquisition-time order or in parallel, the degree of change of the image content of an image pair must itself be determined. With reference to fig. 4, a schematic flow chart for determining the degree of change of the image content of any image pair is provided in the embodiment of the present application. For any image pair, determining the degree of change of its image content specifically includes the following steps:
in step 401, at least one sampling region with the same position is selected from the image pair.
In the two frames of images of the image pair shown in fig. 5, edge positions of the image are selected as sampling regions in consideration of the size of the visual field when the endoscope capture device captures the image: if an edge position does not change significantly, it indicates that the captured visual field, and therefore the captured image, has not changed greatly. In addition, the center position of the image is generally the key focus of attention during the surgical procedure. As an example, 3 small blocks of the same size are selected as sampling regions at three such positions in the two frames of images, denoted region 1, region 2 and region 3 in the previous frame, and region 1', region 2' and region 3' in the next frame. The top-left corner coordinate of the first sampling region is (a, b) in both frames, that of the second sampling region is (c, d), and that of the third sampling region is (e, f), so the three sampling regions occupy the same positions in the two frames: region 1 and region 1' are sampling regions with the same position, region 2 and region 2' are sampling regions with the same position, and region 3 and region 3' are sampling regions with the same position.
In step 402, the following is performed for each of the at least one identically located sampling region: determine, for each frame image in the image pair, the sum and the average value of each of the three components to be processed (the R value, the G value and the B value) over the sampling region. That is, each component of every pixel point in the sampling region is added up to obtain the sums of the components, denoted R1sum, G1sum and B1sum, and the average of each component over the pixel points in the sampling region is calculated, denoted R1avg, G1avg and B1avg.
It can be implemented that, in fig. 5, the sum of each component to be processed in three components to be processed, namely, the R value, the G value, and the B value in the region 1, the region 2, the region 3, the region 1 ', the region 2 ', and the region 3 ', and the average value of each component to be processed are respectively counted as:
the previous frame image:
region 1: TR1sum, TG1sum, TB1sum, TR1avg, TG1avg, TB1avg
Region 2: TR2sum, TG2sum, TB2sum, TR2avg, TG2avg, TB2avg
Region 3: TR3sum, TG3sum, TB3sum, TR3avg, TG3avg, TB3avg
The next frame image:
a region 1': r1sum, G1sum, B1sum, R1avg, G1avg, B1avg
A region 2': r2sum, G2sum, B2sum, R2avg, G2avg, B2avg
A region 3': r3sum, G3sum, B3sum, R3avg, G3avg, B3avg
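Step 402 amounts to per-channel sum and average statistics over a rectangular block. A minimal Python sketch follows, assuming (for illustration only) that an image is a height-by-width grid of (R, G, B) tuples; the function name and parameter layout are not part of the original disclosure.

```python
def region_stats(image, x, y, w, h):
    """image: rows of (R, G, B) pixel tuples.  Returns per-channel
    ([Rsum, Gsum, Bsum], [Ravg, Gavg, Bavg]) over the w-by-h block
    whose top-left corner is at column x, row y."""
    sums = [0, 0, 0]
    count = 0
    for row in image[y:y + h]:
        for pixel in row[x:x + w]:
            for c in range(3):
                sums[c] += pixel[c]  # accumulate R, G, B separately
            count += 1
    avgs = [s / count for s in sums]  # average per component
    return sums, avgs
```

Applying this to region 1 of the previous frame yields (TR1sum, TG1sum, TB1sum) and (TR1avg, TG1avg, TB1avg); applying it at the same coordinates in the next frame yields the primed-region statistics.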
In step 403, based on the sum of each component to be processed and the average value of each component to be processed, the deviation of the sum of each component to be processed and the deviation of the average value of each component to be processed in the three components to be processed of the R value, the G value and the B value in the sampling region in the image pair are calculated.
In one possible embodiment, determining the deviation of the sum of each component and the deviation of the mean of each component to be processed can be implemented as:
for any one of three components to be processed of an R value, a G value and a B value in the sampling region in the image pair, executing the following operations:
subtracting the sum of the components to be processed in the sampling region in the next frame image from the sum of the components to be processed in the sampling region in the previous frame image in the image pair to obtain a difference value of the sums of the components to be processed;
dividing the difference value of the sum of the components to be processed by the sum of the components to be processed in the sampling area in the next frame of image to obtain the deviation of the sum of the components to be processed in the sampling area in the image pair;
subtracting the average value of the components to be processed in the sampling area in the next frame image from the average value of the components to be processed in the sampling area in the previous frame image in the image pair to obtain the difference value of the average values of the components to be processed;
and dividing the difference value of the average value of the components to be processed by the average value of the components to be processed in the sampling area in the next frame of image to obtain the deviation of the average value of the components to be processed in the sampling area in the image pair.
In step 404, the deviations of the sums of the components to be processed and the deviations of the averages of the components to be processed, for the three components (R value, G value and B value) of every sampling region in the image pair, are compared with a preset deviation threshold to determine whether at least one deviation is greater than or equal to that threshold. If every such deviation of every sampling region in the image pair is smaller than the preset deviation threshold, in step 405 it is determined that the degree of change of the image content of the image pair is smaller than the degree threshold; if at least one such deviation of any sampling region in the image pair is greater than or equal to the preset deviation threshold, in step 406 it is determined that the degree of change of the image content of the image pair is greater than or equal to the degree threshold.
It may be implemented to calculate the deviation of the sum of each of the three to-be-processed components of R value, G value, B value and the deviation of the average value of each to-be-processed component in the sampling region with the same position of the two frames of images in the image pair in fig. 5, which may be respectively expressed as:
Region 1 and region 1': (TR1sum - R1sum)/R1sum, (TG1sum - G1sum)/G1sum, (TB1sum - B1sum)/B1sum, (TR1avg - R1avg)/R1avg, (TG1avg - G1avg)/G1avg, (TB1avg - B1avg)/B1avg
Region 2 and region 2': (TR2sum - R2sum)/R2sum, (TG2sum - G2sum)/G2sum, (TB2sum - B2sum)/B2sum, (TR2avg - R2avg)/R2avg, (TG2avg - G2avg)/G2avg, (TB2avg - B2avg)/B2avg
Region 3 and region 3': (TR3sum - R3sum)/R3sum, (TG3sum - G3sum)/G3sum, (TB3sum - B3sum)/B3sum, (TR3avg - R3avg)/R3avg, (TG3avg - G3avg)/G3avg, (TB3avg - B3avg)/B3avg
If all the results are smaller than K, that is, if the results obtained from the six sets of data of each region all fall within the threshold range, it is determined that the degree of change of the image content of the image pair in fig. 5 is smaller than the degree threshold. If any one of the results is greater than or equal to K, the degree of change of the image content of the image pair in fig. 5 is considered to be greater than or equal to the degree threshold. Since every calculated deviation is a ratio, the same preset deviation threshold K can be used for all of them.
Therefore, the degree of change of the image content can be determined by comparing the degree of change of the pixel data of the image in the sampling area at the same position in the image captured by the endoscope, the state of the endoscope is judged, and the output image is determined to be a 2D image or a 3D image.
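Steps 403 through 406 can be sketched together. The deviation follows the definition of step 403, (previous value - next value) / next value; comparing its absolute value against K is an assumption made here for the sketch, since a signed ratio would otherwise never trip the threshold when a value decreases.

```python
def image_pair_changed(prev_regions, next_regions, k):
    """prev_regions / next_regions: per-region (sums, avgs) pairs, each a
    list of three R/G/B values as produced by step 402.  Returns True if
    at least one deviation reaches the threshold k (step 406), and False
    if every deviation stays below it (step 405)."""
    for (p_sums, p_avgs), (n_sums, n_avgs) in zip(prev_regions, next_regions):
        for prev_val, next_val in zip(p_sums + p_avgs, n_sums + n_avgs):
            # deviation = (previous - next) / next, per step 403
            if abs((prev_val - next_val) / next_val) >= k:
                return True
    return False
```

Feeding this function the statistics of region 1/1', 2/2' and 3/3' reproduces the six-ratio-per-region comparison against K described above.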
In a possible implementation, the status of the endoscope can also be determined by observing the position change of the same biological tissue in multiple frames of images, so with reference to fig. 6, another flow chart for determining the degree of change of the image content of any image pair is provided for the present embodiment. For any image pair, determining the degree of change of the image content of the image pair, specifically comprising the following steps:
in step 601, at least one region of biological tissue is selected from a previous image in the image pair as at least one template region, and first position coordinates of each template region in the previous image are recorded.
In step 602, each template region is detected in a subsequent frame image of the image pair, and a second position coordinate of each template region in the subsequent frame image is recorded.
In a possible implementation manner, if the template region is not detected in the image of the next frame in the image pair, it is determined that the displacement of the template region is greater than or equal to a preset displacement threshold.
In step 603, a displacement of each of the template regions is calculated based on the first position coordinates and the second position coordinates.
For example, it can be implemented as: the first position coordinates of the template region in the previous frame image are (a, b), the second position coordinates of the template region in the next frame image are (c, D), and the distance between two coordinate points, i.e. the displacement D of the template region, is:
D = √((c - a)² + (d - b)²)
other methods may be used to calculate the displacement of the template region, which is not limited in this application.
In step 604, it is determined whether the displacement of any of the template regions is smaller than a preset displacement threshold; if the displacement of any template region is smaller than a preset displacement threshold, in step 605, it is determined that the degree of change of the image content of the image pair is smaller than a degree threshold; if the displacement of each template region is greater than or equal to the preset displacement threshold, in step 606, it is determined that the degree of change of the image content of the image pair is greater than or equal to the degree threshold.
Therefore, the change degree of the image content of the image pair can be determined through the position change of the same biological tissue in the multi-frame images, so that whether the endoscope is in a moving state or not is determined, and a 2D image or a 3D image is output.
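Steps 603 through 606, together with the undetected-template rule, can be sketched as follows. The input format assumed here, a list of (first_coord, second_coord) matches with None standing for a template region that was not detected in the next frame, is an illustrative choice, not part of the original disclosure.

```python
import math

def pair_changed_by_displacement(matches, displacement_threshold):
    """matches: ((x1, y1), (x2, y2)) per template region, or
    ((x1, y1), None) when the region is not found in the next frame.
    The image pair is judged unchanged (returns False) as soon as one
    region moved less than the threshold; otherwise it is changed."""
    for first, second in matches:
        if second is None:
            # undetected region counts as displacement >= threshold
            continue
        if math.dist(first, second) < displacement_threshold:
            return False  # degree of change below the degree threshold
    return True  # every template region displaced at least the threshold
```

`math.dist` computes the straight-line distance D given above; in practice, the second coordinates would come from a template-matching search over the next frame.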
In a possible embodiment, in order to prevent phenomena such as a black screen or stuttering when the image is switched while the endoscope alternates between 2D display and 3D display according to the above method, and to ensure that the switching is smooth and difficult for the user to perceive, the embodiment of the present application proceeds as follows after determining, based on the degree of change of the image content of the multiple frames of images, that the endoscope is in a moving state and before outputting a 2D image: if switching from 3D image output to 2D image output, the last frame of image before switching is acquired and continuously displayed as the current image; the switch from the 3D image to the 2D image is completed within a duration threshold; and after the switch to the 2D image, display of the current image ends and the 2D image is output.
Similarly, after determining that the endoscope is in a still state and before outputting a 3D image based on the degree of change of the image content of the multiple frames of images, in this embodiment of the present application, when switching from a 2D image to the 3D image output, a last frame of image before switching may be acquired, and the last frame of image before switching may be continuously displayed as the current image; and, switching from the 2D image to the 3D image within a duration threshold; and after the 3D image is switched, ending displaying the current image, and outputting the 3D image.
Fig. 7 is a schematic diagram illustrating a process of switching a 2D image to a 3D image according to an embodiment of the present application, and fig. 8 is a schematic diagram of a process of switching a 3D image to a 2D image. In fig. 7 and 8, after the state of the endoscope is determined based on the degree of change of the image content of the multiple frames of images, the last frame of image before switching is read and kept on screen as the current display while the endoscope switches from a 3D image to a 2D image or from a 2D image to a 3D image.
The time length threshold value can be changed according to the processing capability of the processor of the endoscope, and the smaller the time length threshold value is, the faster the switching process is, and the smoother the image switching is.
Therefore, smooth switching of images can be achieved, the phenomena of black screen or blockage and the like in the image switching process are prevented, and the actual switching process is difficult to be perceived by a user.
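The freeze-frame transition above can be sketched as a small control routine. All callback names here are hypothetical, and in a real device the mode switch would be an asynchronous hardware operation rather than a sleep; the sketch only shows the ordering of the steps.

```python
import time

def smooth_mode_switch(read_last_frame, show_frame, start_mode_switch,
                       duration_threshold_s):
    """Keep the last pre-switch frame on screen while the display
    pipeline changes between 2D and 3D, hiding any black screen."""
    frozen = read_last_frame()        # last frame before switching
    show_frame(frozen)                # hold it as the current image
    start_mode_switch()               # begin the 2D<->3D switch
    time.sleep(duration_threshold_s)  # switch must finish within this bound
    # normal output in the new display mode resumes after this point
```

A smaller duration threshold shortens the frozen interval, matching the observation above that the switch feels smoother the faster it completes.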
Based on the foregoing description, the embodiment of the present application acquires multiple frames of images collected by an endoscope at specified time intervals; determines the degree of change of the image content of the multiple frames of images in various ways, and determines whether the endoscope is in a moving state based on that degree of change; if the endoscope is in the moving state, a 2D image is output; and if it is in the static state, a 3D image is output. In this way, the state of the endoscope can be judged from the degree of change of the image content of the multiple frames of images it collects, and 2D and 3D images can be adaptively switched according to the scene, which spares the doctor from repeatedly and manually switching the images during the operation, shortens the operation time, improves operation efficiency, and improves the user experience.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
The embodiments provided in the present application are only a few examples of the general concept of the present application, and do not limit the scope of the present application. Any other embodiments extended according to the scheme of the present application without inventive efforts will be within the scope of protection of the present application for a person skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method of switching 2D, 3D images based on an endoscopic surgical scene, the method comprising:
acquiring multiframe images collected by an endoscope at specified time intervals;
determining whether the endoscope is in a moving state based on the degree of change of the image content of the plurality of frames of images;
if the endoscope is in the moving state, determining to output a 2D image;
and if the endoscope is in the static state, determining to output a 3D image.
2. The method according to claim 1, wherein the determining whether the endoscope is in a moving state based on the degree of change in the image content of the plurality of frames of images specifically comprises:
acquiring at least one group of image pairs from the multi-frame images, wherein each group of image pairs consists of two adjacent frames of images in the multi-frame images;
sequentially determining the change degree of the image content of each image pair according to the sequence of the acquisition time, and if the change degree of the image content of any image pair is smaller than a degree threshold value, determining that the endoscope is in a static state;
if the change degree of the image content of any image pair is larger than or equal to the degree threshold value, determining the change degree of the image content of the next image pair; and if the degree of change of the image content of each image pair is greater than or equal to the degree threshold value, determining that the endoscope is in a moving state.
3. The method according to claim 1, wherein the determining whether the endoscope is in a moving state based on the degree of change in the image content of the plurality of frames of images specifically comprises:
acquiring at least one group of image pairs from the multi-frame images, wherein each group of image pairs consists of two adjacent frames of images in the multi-frame images;
determining the change degree of the image content of each image pair in parallel;
if the change degree of the image content of any image pair is smaller than a degree threshold value, determining that the endoscope is in a static state;
and if the change degree of the image content of each image pair is larger than or equal to the degree threshold value, determining that the endoscope is in a moving state.
4. The method according to claim 2 or 3, wherein determining, for any pair of images, a degree of change in image content of the pair of images specifically comprises:
selecting at least one sampling area with the same position in the image pair;
for each of the at least one positionally identical sampling region, the following is performed:
determining the sum of each component to be processed in the three components to be processed of the R value, the G value and the B value in the sampling area of each frame image in the image pair and the average value of each component to be processed;
calculating the deviation of the sum of each component to be processed in the three components to be processed of the R value, the G value and the B value in the sampling area in the image pair and the deviation of the average value of each component to be processed based on the sum of each component to be processed and the average value of each component to be processed;
if the deviation of the sum of each component to be processed in the R value, the G value and the B value of each sampling region in the image pair and the deviation of the average value of each component to be processed are all smaller than a preset deviation threshold, determining that the change degree of the image content of the image pair is smaller than a degree threshold;
and if at least one of the deviations of the sums of the components to be processed in the R value, G value and B value of any sampling region in the image pair and the deviations of the average values of the components to be processed is greater than or equal to the preset deviation threshold, determining that the change degree of the image content of the image pair is greater than or equal to the degree threshold.
5. The method according to claim 4, wherein the calculating of the deviation of the sum of each of the three components to be processed of R, G and B values and the deviation of the average value of each component to be processed in the sampling region in the image pair specifically comprises:
for any one of three components to be processed of an R value, a G value and a B value in the sampling region in the image pair, executing the following operations:
subtracting the sum of the components to be processed in the sampling region in the next frame image from the sum of the components to be processed in the sampling region in the previous frame image in the image pair to obtain a difference value of the sums of the components to be processed;
dividing the difference value of the sum of the components to be processed by the sum of the components to be processed in the sampling area in the next frame of image to obtain the deviation of the sum of the components to be processed in the sampling area in the image pair;
subtracting the average value of the components to be processed in the sampling area in the next frame image from the average value of the components to be processed in the sampling area in the previous frame image in the image pair to obtain the difference value of the average values of the components to be processed;
and dividing the difference value of the average value of the components to be processed by the average value of the components to be processed in the sampling area in the next frame of image to obtain the deviation of the average value of the components to be processed in the sampling area in the image pair.
6. The method according to claim 1, wherein the multi-frame image specifically comprises:
a plurality of frames of consecutive images; or,
sampling multi-frame images with n frames at intervals; wherein n is a positive integer.
7. The method according to claim 2 or 3, wherein determining, for any pair of images, a degree of change in image content of the pair of images specifically comprises:
selecting at least one area of biological tissue from a previous frame image in the image pair as at least one template area, and recording a first position coordinate of each template area in the previous frame image;
detecting each template area in a next frame image in the image pair, and recording a second position coordinate of each template area in the next frame image;
calculating a displacement of each of the template regions based on the first and second position coordinates;
if the displacement of any template area is smaller than a preset displacement threshold, determining that the change degree of the image content of the image pair is smaller than a degree threshold;
and if the displacement of each template area is greater than or equal to a preset displacement threshold, determining that the change degree of the image content of the image pair is greater than or equal to a degree threshold.
8. The method of claim 7, further comprising:
and if the template region cannot be detected in the next frame of image in the image pair, determining that the displacement of the template region is greater than or equal to a preset displacement threshold value.
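The displacement test of claims 7-8 can be sketched as follows. The dictionary representation of template positions, the Euclidean distance metric, and the function name are illustrative assumptions (the claims do not specify how displacement is computed, and the template detection step itself, e.g. normalized cross-correlation, is omitted):

```python
import math

def content_changed(prev_positions, next_positions, disp_threshold):
    """Apply the displacement test of claims 7-8 to one image pair.

    prev_positions: template id -> (x, y) in the previous frame.
    next_positions: template id -> (x, y) in the next frame, or None
    when the template could not be detected there (claim 8).
    Returns True when the change degree of the image content is deemed
    greater than or equal to the degree threshold."""
    displacements = []
    for tid, (x1, y1) in prev_positions.items():
        pos = next_positions.get(tid)
        if pos is None:
            # Claim 8: an undetected template counts as a displacement
            # greater than or equal to the preset threshold.
            displacements.append(disp_threshold)
        else:
            x2, y2 = pos
            displacements.append(math.hypot(x2 - x1, y2 - y1))
    # Claim 7: if ANY template moved less than the threshold, the change
    # degree is below the degree threshold; only when every displacement
    # reaches the threshold is the content considered changed.
    return not any(d < disp_threshold for d in displacements)
```

Note the asymmetry in claim 7: a single stationary template is enough to classify the pair as unchanged.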
9. The method according to claim 1, wherein after determining that the endoscope is in the moving state and before outputting the 2D image based on the degree of change in the image content of the plurality of frames of images, the method further comprises:
in the case of switching from the 3D image to the 2D image for output, acquiring the last frame image before switching, and continuously displaying the last frame image before switching as the current image; and,
switching from the 3D image to the 2D image within a duration threshold;
after switching to the 2D image, ending the display of the current image, and outputting the 2D image;
after determining that the endoscope is in a still state and before outputting a 3D image based on the degree of change in the image content of the plurality of frames of images, the method further comprises:
in the case of switching from the 2D image to the 3D image for output, acquiring the last frame image before switching, and continuously displaying the last frame image before switching as the current image; and,
switching from the 2D image to the 3D image within a duration threshold;
and after switching to the 3D image, ending the display of the current image, and outputting the 3D image.
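The transition of claim 9 (holding the last pre-switch frame as the current image while the output mode changes, then ending that display once the new mode is active) can be modeled as a frame sequence. All names, and modeling the duration threshold as a count of frame periods, are illustrative assumptions, not from the patent:

```python
def transition_frames(frames_before, frames_after_ready, switch_latency):
    """Model the claim-9 transition for one mode switch.

    frames_before: frames output in the old mode, up to the switch.
    frames_after_ready: frames output in the new mode once ready.
    switch_latency: number of frame periods the switch takes; it must
    stay within the duration threshold of the claim."""
    last_frame = frames_before[-1]
    shown = list(frames_before)
    # Continuously display the last pre-switch frame as the current
    # image while the output pipeline reconfigures.
    shown.extend([last_frame] * switch_latency)
    # After the switch completes, end the held display and output the
    # frames in the new mode.
    shown.extend(frames_after_ready)
    return shown
```

This keeps the screen from blanking or flickering during the 2D/3D handover, which is the practical point of holding the last frame.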
10. An endoscopic device, comprising an endoscope, a processor, a display, and a memory, wherein:
the endoscope is used for acquiring images;
the memory for storing a computer program executable by the processor;
the display is used for displaying the image;
the processor is connected with the memory and configured to execute the computer program to implement the method for switching 2D and 3D images based on an endoscopic surgery scene according to any one of claims 1-9.
CN202111274843.7A 2021-10-29 2021-10-29 Method for switching 2D (two-dimensional) and 3D (three-dimensional) images based on endoscopic surgery scene and related equipment Pending CN114005526A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111274843.7A CN114005526A (en) 2021-10-29 2021-10-29 Method for switching 2D (two-dimensional) and 3D (three-dimensional) images based on endoscopic surgery scene and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111274843.7A CN114005526A (en) 2021-10-29 2021-10-29 Method for switching 2D (two-dimensional) and 3D (three-dimensional) images based on endoscopic surgery scene and related equipment

Publications (1)

Publication Number Publication Date
CN114005526A true CN114005526A (en) 2022-02-01

Family

ID=79925392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111274843.7A Pending CN114005526A (en) 2021-10-29 2021-10-29 Method for switching 2D (two-dimensional) and 3D (three-dimensional) images based on endoscopic surgery scene and related equipment

Country Status (1)

Country Link
CN (1) CN114005526A (en)

Similar Documents

Publication Publication Date Title
US20140293007A1 (en) Method and image acquisition system for rendering stereoscopic images from monoscopic images
KR102163327B1 (en) Efficient and interactive bleeding detection in a surgical system
US9635343B2 (en) Stereoscopic endoscopic image processing apparatus
US20160295194A1 (en) Stereoscopic vision system generatng stereoscopic images with a monoscopic endoscope and an external adapter lens and method using the same to generate stereoscopic images
JP2008301968A (en) Endoscopic image processing apparatus
US20190051039A1 (en) Image processing apparatus, image processing method, program, and surgical system
US10609354B2 (en) Medical image processing device, system, method, and program
US10993603B2 (en) Image processing device, image processing method, and endoscope system
CN101637379A (en) Image display apparatus, endoscope system using the same, and image display method
CN109978015B (en) Image processing method and device and endoscope system
US20190045170A1 (en) Medical image processing device, system, method, and program
JP3438937B2 (en) Image processing device
JPH04138127A (en) Mesh image alleviation device for endoscope
KR20150109076A (en) Colnoscopy surgery simulation system
US20080136815A1 (en) Image display controlling apparatus, image display controlling program and image display controlling method
CN114005526A (en) Method for switching 2D (two-dimensional) and 3D (three-dimensional) images based on endoscopic surgery scene and related equipment
KR20120008292A (en) Virtual arthroscope surgery system
CN113744266B (en) Method and device for displaying focus detection frame, electronic equipment and storage medium
US20200085411A1 (en) Method, apparatus and readable storage medium for acquiring an image
US20200261180A1 (en) 27-3systems, methods, and computer-readable media for providing stereoscopic visual perception notifications and/or recommendations during a robotic surgical procedure
WO2023184526A1 (en) System and method of real-time stereoscopic visualization based on monocular camera
CN115299914A (en) Endoscope system, image processing method and device
CN117731214A (en) Image display method for endoscope system, and endoscope system
JP2023011303A (en) Medical image processing apparatus and operating method of the same
CN117608511A (en) Image display method for endoscope system, and endoscope system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination