WO2023024701A1 - Panoramic endoscope and image processing method thereof - Google Patents

Panoramic endoscope and image processing method thereof

Info

Publication number
WO2023024701A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
endoscope
panoramic
polyp
main camera
Application number
PCT/CN2022/102948
Other languages
French (fr)
Chinese (zh)
Inventor
杜武华
徐健玮
佴广金
钱大宏
黄显峰
Original Assignee
山东威高宏瑞医学科技有限公司
Application filed by 山东威高宏瑞医学科技有限公司
Publication of WO2023024701A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/005: Flexible endoscopes
    • A61B 1/04: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes, combined with photographic or television appliances
    • A61B 1/05: Instruments combined with photographic or television appliances, characterised by the image sensor, e.g. camera, being in the distal end portion
    • A61B 1/06: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes, with illuminating arrangements
    • A61B 1/07: Illuminating arrangements using light-conductive means, e.g. optical fibres
    • A61B 1/31: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes, for the rectum, e.g. proctoscopes, sigmoidoscopes, colonoscopes

Definitions

  • the invention belongs to the technical field of medical detection, and in particular relates to a panoramic endoscope and a medical image recognition monitoring system.
  • An endoscopic procedure is the searching and screening of the interior of the human body, for medical purposes, by means of a medical device (an endoscope) placed within hollow organs or body cavities.
  • Unlike most other medical imaging devices, endoscopes are inserted directly into the organ and typically use optical cameras operating at visible or near-visible frequencies (such as infrared) to produce images and video frames from inside the organ.
  • endoscopic surgery is used to examine the organ being tested for suspicious localized areas, such as polyps, tumors, or evidence of cancer cells.
  • Colorectal cancer is the second most lethal cancer in China and the third most lethal cancer in the world. Early detection and removal of colorectal polyps is crucial to reduce the mortality of colorectal cancer patients.
  • A colonoscopy procedure is an endoscopic examination of the large intestine (colon) and the distal part of the small intestine, using an optical camera (usually a CCD or CMOS sensor) on a fiber-optic or flexible tube passed through the anus.
  • As the gold standard for colorectal cancer screening, colonoscopy can reduce the risk of death from colorectal cancer through early detection of tumors and removal of precancerous lesions. It provides visual diagnosis (e.g., of ulcers and polyps) and an opportunity to biopsy or resect lesions suspected of being colorectal cancer.
  • The main interest of the general public in colonoscopy is the removal of polyps, including those 1 millimeter (mm) or smaller. Once removed, polyps can be studied under a microscope to determine whether they are precancerous. It can take 15 years or less for a polyp to turn into cancer. For every 1.0% increase in the adenoma detection rate (ADR), the risk of colorectal cancer decreases by 3.0%; conversely, a missed adenoma may allow the tumor to develop further and delay treatment. The miss rate of polyp detection is estimated at 4%-12%, but recent clinical studies show that it may be as high as 25%.
  • ADR adenoma detection rate
  • the purpose of the present application is to provide a panoramic endoscope to help doctors find polyps that are not in the field of vision due to intestinal wall folds.
  • the application discloses a panoramic endoscope, comprising:
  • the main camera is arranged directly in front of the tip of the endoscope
  • a plurality of sub-cameras are arranged on the side wall of the tip;
  • the field of view of the main camera and the fields of view of the plurality of secondary cameras together cover a spherical field of view.
  • a main lighting lens arranged directly in front of the tip, and connected to a light source through a light guide, for providing lighting for the main camera;
  • a secondary lighting lens is arranged on the side wall of the tip and connected to the light source through a light guide for providing lighting for the secondary camera.
  • the number of pixels of the primary camera is higher than that of the secondary camera.
  • the viewing angle of the spherical field of view is not less than 330 degrees.
  • the present application also discloses a panoramic endoscope image processing method, including:
  • If a polyp is detected, a polyp alarm is issued, together with a prompt indicating which camera captured the image containing the polyp.
  • The step of performing polyp detection on the images synchronously collected from the main camera and the multiple auxiliary cameras includes center enhancement, denoising, and color rendering to improve image quality and recognizability.
  • When a polyp is detected, the alarm includes at least one of the following: highlighting, framing, flashing on the screen of the external monitor, or an audio prompt.
  • the images collected by the main camera and the auxiliary camera are shot in a white light mode.
  • the captured images are input into a trained target detection model to detect whether polyps appear in real time;
  • the target detection model includes a first encoder and a detector
  • the first encoder is configured to perform feature extraction on an image obtained in white light mode to obtain a feature map
  • the detector is configured to perform regression on the input feature map to obtain the coordinates of the location of the polyp.
  • polyp detection is performed through a convolutional neural network
  • the extracted target features include at least one of the following: blood vessel color features, glandular tube features, and edge features.
  • The auxiliary cameras cooperate with the main camera to form a spherical field of view, observing the areas in front of, beside, and behind the lens simultaneously; this overcomes blind spots caused by occlusion by the intestinal wall and reduces missed polyp diagnoses;
  • Fig. 1 is a schematic diagram of a panoramic endoscope according to the present application.
  • Fig. 2 is a schematic diagram of a light source of a panoramic endoscope according to an embodiment of the present application
  • Fig. 3 is a working schematic diagram of a panoramic endoscope according to an embodiment of the present application.
  • Fig. 4 is a schematic view of a panoramic endoscope according to an embodiment of the present application.
  • FIG. 5 is a schematic flow diagram of a medical image recognition monitoring system according to an embodiment of the present application.
  • Fig. 6 is a structural schematic diagram of a panoramic endoscope including a structured light path according to an embodiment of the present application
  • Fig. 7 is a schematic structural diagram of the target detection model used in the method for measuring the absolute size of a lesion under the panoramic endoscope of the present application;
  • Fig. 8 is a schematic structural diagram of a U-NET network based on an encoder-decoder structure, used in the method for measuring the absolute size of a lesion under the panoramic endoscope of the present application;
  • Fig. 9 is a schematic diagram of the lesion size calculation model used in the method for measuring the absolute size of a lesion under the panoramic endoscope of the present application.
  • As used herein, the terms "panoramic endoscope" and "endoscope" are used interchangeably and both refer to the panoramic endoscope of the present invention.
  • the spherical field of view mentioned in this application refers to a spherical or approximately spherical field of view formed by taking a point near the main camera and the sub-camera on the tip 10 of the panoramic endoscope as the center of the sphere. Since the spherical shape can be composed of countless spherical sectors, the spherical field of view in this application can also be composed of a number of specified spherical sector fields of view.
  • a first embodiment of the present invention relates to a panoramic endoscope.
  • As shown in Fig. 1, more specifically, the embodiment provides a panoramic endoscope 1.
  • The panoramic endoscope 1 has a bendable, rigid tip 10, optionally constructed from snake-bone segments.
  • A plurality of cameras are arranged on the tip 10. Specifically, these include a main camera 110, whose shooting direction faces the advancing direction of the panoramic endoscope 1, and a plurality of sub-cameras evenly distributed on the peripheral side wall at the end of the panoramic endoscope 1; there are optionally 2 to 6 sub-cameras, preferably 3 (as shown in Fig. 1): a first sub-camera 120, a second sub-camera 130, and a third sub-camera 140.
  • the field of view of the main camera and the fields of view of the plurality of secondary cameras together cover a spherical field of view.
  • When only the main camera is present, its field of view is a spherical sector with an apex angle of 140 degrees (the main camera can be regarded as the center of the sphere, with the apex at that center). When three sub-cameras are present, the field of view of each sub-camera (for example, the first sub-camera 120) covers a spherical sector with an apex angle of 120 degrees, so that the combined fields of view of the main camera and the sub-cameras cover a spherical field of view of not less than 330 degrees.
  • A main lighting lens 111 is arranged around the main camera 110, and one or more secondary lighting lenses are arranged around each secondary camera to provide side lighting, such as the first secondary lighting lens 121, the second secondary lighting lens 131, and the third secondary lighting lens 141 in Fig. 2.
  • The main lighting lens and the secondary lighting lenses are connected, through the main light guide and the secondary light guides, to the main light-guide interface and the secondary light-guide interface of the light box, respectively.
  • The light source in the light box may be a xenon lamp, a laser, or an LED; its intensity can be adjusted and its illumination spectrum switched.
  • The main light-guide interface provides stronger light intensity and supports NBI or multi-spectral switching, while the secondary light guides carry weaker light, provide only auxiliary lighting, and do not need spectrum switching.
  • The secondary guide beams are generated by splitting the main beam.
  • The splitting device may be an optical splitter, which distributes light from the main beam to the secondary light-guide links according to a set ratio of optical power.
  • The image quality of the main camera and the secondary cameras differs.
  • The main camera 110 is a high-definition camera, optionally 2 megapixels with a resolution of 1920*1080; the three secondary cameras are standard-definition cameras of smaller size, optionally 0.1-2 megapixels, preferably 160,000 pixels with a resolution of 480*320.
  • the resolution of the main camera is usually higher than that of the secondary camera.
  • In use, the tip 10 is inserted into the body; the image acquisition device group is electrically connected to an external processor to acquire, in real time, multiple images covering a spherical viewing angle of not less than 330°, which are transmitted to the external image processor.
  • Another embodiment of the present invention relates to a panoramic endoscope image processing method.
  • the panoramic endoscope of the present invention is used to acquire images during a colon exam and to identify the location of different types of suspicious tissue, in this case polyps.
  • Polyps may be large or medium polyps, small polyps and, most importantly for detection, flat polyps.
  • In addition to polyps, the system can identify other suspicious tissue anywhere on or beside colonic folds, and performs online detection of polyps or other suspicious tissue during an actual colonoscopy, sigmoidoscopy, etc.
  • multiple images are acquired from at least one of the following positions of the patient's body: abdominal cavity, esophagus, stomach, nasal cavity, trachea, bronchi, uterine cavity, and vagina.
  • Step 501 of the method: synchronously collect images using the main camera and the multiple secondary cameras of the panoramic endoscope.
  • Step 502 Perform polyp detection on the images synchronously collected from the main camera and multiple auxiliary cameras.
  • The images captured synchronously by the main camera and the multiple sub-cameras may first undergo one or more of the following preprocessing steps: a histogram improvement algorithm; adaptive enhancement of contrast and brightness and color normalization of the image frame according to predefined standards; super-resolution improvement of image frames; unbalanced scaling of the luma and color channels of the image frame so that a chosen color frequency dominates, with each color channel individually equalized and controlled to suppress noise amplification; measuring the signal-to-noise ratio and reducing noise or filtering frames accordingly; verifying that the image is in focus and filtering out out-of-focus frames; or any combination thereof.
  • Step 503 If a polyp is detected, a polyp alarm is issued, and a prompt message of the camera to which the image of the detected polyp belongs is output.
  • If a polyp is detected in an image captured by one of the secondary cameras (for example, the first secondary camera 120), a prompt identifying the first secondary camera 120 is output to the medical operator through an output device (for example, a display).
  • When a polyp is detected, the alarm includes at least one of the following: highlighting, framing, flashing on the screen of the external monitor, or an audio prompt.
  • In subsequent interaction with the I/O device, the operator can steer the main camera toward the region flagged by the alarm to obtain a higher-resolution image for observation.
  • The convolutional neural network model includes three models, trained on a qualified-image library, an anatomical-site library, and a site-feature library, used respectively to judge whether an endoscopic image is acceptable, to identify the anatomical site, and to recognize site features and determine the extent of cancer.
  • the model is Resnet50, developed in Python language, packaged into a RESTful API (RESTful network interface) and called by other modules.
  • the panoramic endoscope shoots in white light mode, and the captured images are input into a trained target detection model to detect the presence of polyps in real time.
  • the polyp area is automatically segmented in real time using a deep neural network based on an encoder-decoder structure.
  • the panoramic endoscope also has a structured light channel 300 , that is, another channel independent of the common illumination light channel, which can be used to measure the size of the lesion under the endoscope.
  • the structured light channel 300 includes a structured light generator 34 , a coupler 33 , an optical fiber 32 and a structured light projector 31 .
  • The structured light generator 34 produces visible light with a wavelength between 400 and 700 nm, or light whose passband width is less than 5% of its central wavelength; the actual choice is determined experimentally to minimize reflections and obtain the clearest structured-light image.
  • The coupler 33 is configured to couple the light source 201 to the optical fiber 32; the structured light generator 34 is configured to switch in and out between the coupler 33 and the light source 201; one end of the optical fiber 32 is connected to the coupler 33 and the other end to the structured light projector 31; the main illumination lens 111 and the structured light projector 31 are arranged at the distal end of the endoscope.
  • After passing through the coupler 33 and the structured-light fiber 32, the structured light is projected onto the tissue surface by the structured light projector 31. Once projected onto the surface of the object to be measured, it is modulated by the height profile of that object.
  • the modulated structured light is collected by the camera system and sent to the computer for analysis and calculation to obtain the three-dimensional data of the object to be measured.
  • the illumination of the panoramic endoscope has the following two modes: structured light mode and white light mode;
  • When switching to structured light mode, the illumination path of the illumination lens is blocked and the structured light generator 34 is switched in between the coupler 33 and the light source 201 to generate structured light;
  • when switching to white light mode, both the structured light channel 300 and the illumination lens channel provide ordinary illumination light.
  • The system for measuring the absolute size of the lesion under the endoscope is then switched to structured light mode for structured-light imaging.
  • The image processing system then analyzes the collected structured-light images and calculates the polyp size in combination with the polyp segmentation results.
  • The target detection model includes a first encoder and a detector; the first encoder is configured to perform feature extraction on an image obtained in white light mode to obtain a feature map; the extracted target features include at least one of the following: blood vessel color features, gland duct features, and edge features.
  • the detector is configured to regress on the input feature map to obtain the coordinates of where the polyps are located.
  • the deep neural network based on encoder-decoder structure includes:
  • a second encoder for input images including convolutional layers, activation layers, pooling layers;
  • Decoder for output results including convolution layer, activation layer, concatenation layer, upsampling layer;
  • the feature maps whose size difference is less than a predetermined threshold are fused together by cross-layer feature fusion between the second encoder and decoder.
  • The white-light image x is used as input and fed into an encoder for feature extraction; the resulting feature map is then passed through a detector, which regresses the coordinates y of the polyp location.
  • the encoder and detector are composed of deep neural networks, including convolutional layers, downsampling layers, upsampling layers, pooling layers, batch normalization layers, activation layers, etc.
  • the fine segmentation of the polyp region is based on a deep neural network with an encoder-decoder structure.
  • the left half is the encoder, including convolutional layers, activation layers, and pooling layers.
  • the right half is the decoder, including convolutional layers, activation layers, concatenation layers, and upsampling layers.
  • the feature maps with similar sizes are fused together directly between the encoder and the decoder through cross-layer feature fusion, which can better extract context information and global information.
  • Figure 8 is a U-NET network based on encoder-decoder structure.
  • Once the lesion has been segmented, the system switches, automatically or manually, to structured-light imaging mode; at the same time, the rear end of the illumination channel is closed by a blocking lens to prevent the illumination light from passing.
  • the structured light is projected onto the tissue surface through the above-mentioned structured light channel, and the deformation of the structured light image will be collected by the endoscope camera and sent to the image processing center for processing. After the collected structured light image is sent to the image processing center, the image processing center combines the segmentation result and the deformation of the structured light image to calculate the three-dimensional contour of the lesion through the phase shift method and give the lesion size.
  • In the lesion size calculation model, P is the optical center of the projector and C is the optical center of the camera. O is the intersection of the camera optical axis and the projector optical axis. The horizontal plane through point O is taken as the reference X axis in the calculation. L1 and L2 are the distances from the camera optical center and the projector optical center to the X axis, respectively, and d is the distance from the camera optical center to the projector optical center along the X axis.
  • The A-B-O plane is a hypothetical virtual plane parallel to the line PC connecting the projector optical center and the camera optical center. For a point Q on the object surface, the height Z of Q relative to the reference plane can be obtained from the triangular relationship between these quantities.
  • Since a plane-to-plane projection is a linear mapping, the projector pattern also has a fixed period on the virtual plane A-B-O, and the length AB is linearly related to the phase.
  • A(x, y) is the background light intensity
  • B(x, y)/A(x, y) represents the contrast of the grating stripes
  • φ(x, y) is the phase value; in the N-step phase-shift method, the projected phase of each sinusoidal grating step differs from that of the previous step by 2π/N, which yields a system of N such equations.
  • The absolute real-world coordinates (X, Y, Z) of point Q can then be calculated from Z; by computing the absolute coordinates (X, Y, Z) of every point in the collected structured-light image and combining them with the polyp segmentation results described above, the three-dimensional size of the polyp can be obtained.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Endoscopes (AREA)

Abstract

A panoramic endoscope (1), comprising a main camera (110) disposed directly in front of a distal end (10) of the endoscope, and a plurality of secondary cameras disposed on a side wall of the distal end (10), wherein the field of view of the main camera (110) and the fields of view of the plurality of secondary cameras jointly cover a spherical field of view. The panoramic endoscope (1) uses the plurality of secondary cameras in cooperation with the main camera (110) to form a spherical field of view that observes the areas directly in front of, beside, and behind the lens simultaneously, thereby overcoming the blind spots caused by occlusion by the intestinal wall and reducing missed diagnosis of polyps. Polyps appearing beside or behind the lens can be captured promptly by the secondary cameras, without the operator having to manipulate a snake bone to bend the endoscopic lens backward for observation, thereby simplifying endoscope insertion and withdrawal operations.

Description

Panoramic endoscope and image processing method thereof
Technical Field
The invention belongs to the technical field of medical detection, and in particular relates to a panoramic endoscope and a medical image recognition and monitoring system.
Background Art
An endoscopic procedure is the searching and screening of the interior of the human body, for medical purposes, by means of a medical device (an endoscope) placed within hollow organs or body cavities. Unlike most other medical imaging devices, endoscopes are inserted directly into the organ and typically use optical cameras operating at visible or near-visible frequencies (such as infrared) to produce images and video frames from inside the organ. Endoscopic procedures are typically used to examine the organ for suspicious localized areas, such as polyps, tumors, or evidence of cancer cells.
Colorectal cancer is the second most lethal cancer in China and the third most lethal cancer worldwide. Early detection and removal of colorectal polyps is crucial to reducing mortality among colorectal cancer patients. A colonoscopy procedure is an endoscopic examination of the large intestine (colon) and the distal part of the small intestine, using an optical camera (usually a CCD or CMOS sensor) on a fiber-optic or flexible tube passed through the anus. As the gold standard for colorectal cancer screening, colonoscopy can reduce the risk of death from colorectal cancer through early detection of tumors and removal of precancerous lesions. It provides visual diagnosis (e.g., of ulcers and polyps) and an opportunity to biopsy or resect lesions suspected of being colorectal cancer.
The main interest of the general public in colonoscopy is the removal of polyps, including those 1 millimeter (mm) or smaller. Once removed, polyps can be studied under a microscope to determine whether they are precancerous. It can take 15 years or less for a polyp to turn into cancer. For every 1.0% increase in the adenoma detection rate (ADR), the risk of colorectal cancer decreases by 3.0%; conversely, a missed adenoma may allow the tumor to develop further and delay treatment. The miss rate of polyp detection is estimated at 4%-12%, but recent clinical studies show that it may be as high as 25%. A missed colorectal cancer can lead to subsequent metastatic colon cancer, with a survival rate below 10%. The adenoma detection rate depends largely on the quality of bowel preparation and on the doctor's experience, and is therefore highly subjective. Two problems arise during endoscopy: (1) a polyp appears within the endoscope's field of view but the doctor does not notice it; (2) a polyp never appears in the endoscope's field of view at all.
Since the introduction of deep neural networks, many studies on artificial-intelligence-assisted endoscopic analysis have been reported. However, these focus mainly on assisting polyp detection and are therefore relevant only to problem (1). Current computer-aided polyp detection systems can only help doctors find polyps that appear within the endoscopic field of view; they can do nothing about missed diagnoses caused by blind spots, arising from the doctor's level of operating experience and the complexity of the intestinal environment, in which some polyps never appear in the field of view at all. How to achieve quality control over these blind spots through new hardware design and computer-aided systems is therefore the problem we need to consider.
Summary of the Invention
One purpose of the present application is to provide a panoramic endoscope that helps doctors promptly find polyps that do not appear in the field of view because they are hidden by intestinal wall folds.
The application discloses a panoramic endoscope, comprising:
a main camera, arranged directly in front of the tip of the endoscope;
a plurality of sub-cameras, arranged on the side wall of the tip;
wherein the field of view of the main camera and the fields of view of the plurality of sub-cameras together cover a spherical field of view.
In a preferred example, the endoscope further comprises:
a main lighting lens, arranged directly in front of the tip and connected to a light source through a light guide, for providing illumination for the main camera; and
a secondary lighting lens, arranged on the side wall of the tip and connected to the light source through a light guide, for providing illumination for the secondary camera.
In a preferred example, the pixel count of the main camera is higher than that of the secondary cameras.
In a preferred example, the viewing angle of the spherical field of view is not less than 330 degrees.
The present application also discloses a panoramic endoscope image processing method, comprising:
collecting images synchronously from the main camera and the multiple sub-cameras of the panoramic endoscope described above;
performing polyp detection separately on the images synchronously collected from the main camera and the multiple sub-cameras;
if a polyp is detected, issuing a polyp alarm and outputting a prompt indicating which camera captured the image containing the polyp.
In a preferred example, the step of performing polyp detection on the images synchronously collected from the main camera and the multiple sub-cameras includes center enhancement, denoising, and color rendering to improve image quality and recognizability.
In a preferred example, when a polyp is detected, the alarm includes at least one of the following: highlighting, framing, flashing on the screen of the external monitor, or an audio prompt.
In a preferred example, the images collected by the main camera and the secondary cameras are captured in white light mode.
In a preferred example, the captured images are input into a trained target detection model to detect in real time whether a polyp appears;
the target detection model includes a first encoder and a detector;
the first encoder is configured to perform feature extraction on an image obtained in white light mode to obtain a feature map;
the detector is configured to perform regression on the input feature map to obtain the coordinates of the polyp location.
In a preferred example, polyp detection is performed by a convolutional neural network, and the extracted target features include at least one of the following: blood vessel color features, gland duct features, and edge features.
The panoramic endoscope and medical image recognition and monitoring system of the present application have at least the following technical effects:
1. The multiple sub-cameras cooperate with the main camera to form a spherical field of view, observing the areas in front of, beside, and behind the lens simultaneously; this overcomes the blind spots caused by occlusion by the intestinal wall and reduces missed polyp diagnoses.
2. Polyps appearing beside or behind the lens can be captured promptly by the sub-cameras, without the operator having to manipulate the snake bone to bend the endoscope lens backward for observation, which simplifies insertion and withdrawal of the endoscope.
A large number of technical features are described in the specification of the present application, distributed among the various technical solutions; listing every possible combination of these technical features (i.e., every technical solution) would make the specification excessively long. To avoid this, the technical features disclosed in the summary above, in the embodiments and examples below, and in the drawings may be freely combined with one another to form various new technical solutions (all of which should be regarded as described in this specification), unless such a combination of features is technically infeasible. For example, if one example discloses features A+B+C and another discloses features A+B+D+E, where C and D are equivalent means serving the same function so that only one of them can be used at a time, and feature E can technically be combined with feature C, then the solution A+B+C+D should not be regarded as described because it is technically infeasible, whereas the solution A+B+C+E should be regarded as described.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the panoramic endoscope according to the present application;
Fig. 2 is a schematic diagram of the light source of a panoramic endoscope according to an embodiment of the present application;
Fig. 3 is a schematic working diagram of a panoramic endoscope according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the viewing angles of a panoramic endoscope according to an embodiment of the present application;
Fig. 5 is a schematic flow diagram of a medical image recognition and monitoring system according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a panoramic endoscope including a structured light path according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of the target detection model used in the method for measuring the absolute size of a lesion under the panoramic endoscope of the present application;
Fig. 8 is a schematic structural diagram of a U-NET network based on an encoder-decoder structure, used in the method for measuring the absolute size of a lesion under the panoramic endoscope of the present application;
Fig. 9 is a schematic diagram of the lesion size calculation model used in the method for measuring the absolute size of a lesion under the panoramic endoscope of the present application.
Explanation of reference signs:
1 - Panoramic endoscope
10 - Tip (distal end)
110 - Main camera
111 - Main lighting lens
112 - Main light guide
1121 - Main light-guide interface
120 - First sub-camera
121 - First secondary lighting lens
122 - Secondary light guide
1221 - Secondary light-guide interface
130 - Second sub-camera
131 - Second secondary lighting lens
140 - Third sub-camera
141 - Third secondary lighting lens
2 - Light box
201 - Light source
300 - Structured light path
31 - Structured light projector
32 - Optical fiber
33 - Coupler
34 - Structured light generator
Detailed Description of the Embodiments
In the following description, numerous technical details are set forth so that the reader can better understand the present application. However, those of ordinary skill in the art will appreciate that the technical solutions claimed in this application can be implemented even without these technical details and without the various changes and modifications based on the following embodiments.
Terminology
As used herein, the terms "panoramic endoscope" and "endoscope" are used interchangeably and both refer to the panoramic endoscope of the present invention.
Spherical field of view: as used in this application, a spherical field of view is a spherical or approximately spherical field of view centered on a point near the main camera and the sub-cameras on the tip 10 of the panoramic endoscope. Since a sphere can be composed of countless spherical sectors, the spherical field of view of this application can also be assembled from a specified number of spherical-sector fields of view.
A first embodiment of the present invention relates to a panoramic endoscope. As shown in Fig. 1, more specifically, it is a panoramic endoscope 1. The panoramic endoscope 1 has a bendable, rigid tip 10, optionally constructed from snake-bone segments. A plurality of cameras are arranged on the tip 10. Specifically, these include a main camera 110, whose shooting direction faces the advancing direction of the panoramic endoscope 1, and a plurality of sub-cameras evenly distributed on the peripheral side wall at the end of the panoramic endoscope 1; there are optionally 2 to 6 sub-cameras, preferably 3 (as shown in Fig. 1): a first sub-camera 120, a second sub-camera 130, and a third sub-camera 140.
As shown in Fig. 4, the field of view of the main camera and the fields of view of the plurality of sub-cameras together cover a spherical field of view. Referring to the left half of Fig. 4, in one embodiment, when only the main camera is present, its field of view is a spherical sector with an apex angle of 140 degrees (the main camera can be regarded as the center of the sphere, with the apex at that center). When three sub-cameras are present, the field of view of each sub-camera (for example, the first sub-camera 120) covers a spherical sector with an apex angle of 120 degrees, so that the combined fields of view of the main camera and the sub-cameras cover a spherical field of view of not less than 330 degrees.
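As an illustrative aside, the nominal coverage of these camera cones can be sanity-checked by summing solid angles. The sketch below assumes each camera sees an ideal cone with the stated apex angle and ignores overlaps and mounting geometry, so it only shows that near-spherical coverage is plausible; it is not the patent's own calculation.

```python
import math

def cone_solid_angle(apex_angle_deg):
    """Solid angle (in steradians) of an ideal cone with the given full apex angle."""
    half = math.radians(apex_angle_deg) / 2.0
    return 2.0 * math.pi * (1.0 - math.cos(half))

main = cone_solid_angle(140)        # main camera: 140-degree spherical sector
subs = 3 * cone_solid_angle(120)    # three sub-cameras: 120-degree sectors each
full_sphere = 4.0 * math.pi

print(f"main: {main:.2f} sr, subs: {subs:.2f} sr, sum: {main + subs:.2f} sr")
print(f"full sphere: {full_sphere:.2f} sr")
# The nominal sum (about 13.6 sr) exceeds the full sphere (about 12.6 sr),
# so spherical coverage is geometrically plausible if overlaps and gaps are small.
```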
A main lighting lens 111 is arranged around the main camera 110, and one or more secondary lighting lenses are arranged around each sub-camera to provide side lighting, such as the first secondary lighting lens 121, the second secondary lighting lens 131, and the third secondary lighting lens 141 in Fig. 2. The main lighting lens and the secondary lighting lenses are connected, through the main light guide and the secondary light guides, to the main light-guide interface and the secondary light-guide interface of the light box, respectively. Optionally, the light source in the light box may be a xenon lamp, a laser, or an LED; its intensity can be adjusted and its illumination spectrum switched. The main light-guide interface provides stronger light intensity and supports NBI or multi-spectral switching, while the secondary light guides carry weaker light, provide only auxiliary lighting, and do not need spectrum switching.
In one embodiment, the secondary guide beams are generated by splitting the main beam. The splitting device may be an optical splitter, which distributes light from the main beam to the secondary light-guide links according to a set ratio of optical power.
The image quality of the main camera and the sub-cameras differs. The main camera 110 is a high-definition camera, optionally 2 megapixels with a resolution of 1920*1080; the three sub-cameras are standard-definition cameras of smaller size, optionally 0.1-2 megapixels, preferably 160,000 pixels with a resolution of 480*320. As technology develops, cameras will become smaller and cheaper, and the resolutions of the main camera and the sub-cameras can be increased further; however, the resolution of the main camera is usually higher than that of the sub-cameras.
As shown in Fig. 3, in use, the tip 10 is inserted into the body; the image acquisition device group is electrically connected to an external processor to acquire, in real time, multiple images covering a spherical viewing angle of not less than 330°, which are transmitted to the external image processor.
Another embodiment of the present invention relates to a panoramic endoscope image processing method.
In this embodiment, the panoramic endoscope of the present invention is used to acquire images during a colon examination and to identify the locations of different types of suspicious tissue, in this case polyps. Polyps may be large or medium polyps, small polyps and, most importantly for detection, flat polyps. In addition to polyps, the system can identify other suspicious tissue anywhere on or beside colonic folds, and performs online detection of polyps or other suspicious tissue during an actual colonoscopy, sigmoidoscopy, etc. In one embodiment, multiple images are acquired from at least one of the following locations in the patient's body: abdominal cavity, esophagus, stomach, nasal cavity, trachea, bronchi, uterine cavity, and vagina.
Step 501 of the method: synchronously collect images using the main camera and the multiple sub-cameras of the panoramic endoscope.
Step 502: perform polyp detection separately on the images synchronously collected from the main camera and the multiple sub-cameras. In some embodiments, the images captured synchronously by the main camera and the multiple sub-cameras first undergo one or more of the following preprocessing steps: a histogram improvement algorithm; adaptive enhancement of contrast and brightness and color normalization of the image frame according to predefined standards; super-resolution improvement of image frames; unbalanced scaling of the luma and color channels of the image frame so that a chosen color frequency dominates, with each color channel individually equalized and controlled to suppress noise amplification; measuring the signal-to-noise ratio and reducing noise or filtering frames accordingly; verifying that the image is in focus and filtering out out-of-focus frames; or any combination thereof.
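The exact preprocessing chain is not specified beyond the list above. As a rough sketch, a few of these steps (focus filtering, histogram improvement, denoising) could be implemented with OpenCV as follows; the function choices and the blur threshold are illustrative assumptions, not the patented pipeline.

```python
import cv2

def preprocess_frame(frame_bgr, blur_threshold=100.0):
    """Illustrative per-frame preprocessing: focus check, CLAHE on luma, denoising."""
    # Focus check: variance of the Laplacian; drop frames that look out of focus.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_threshold:
        return None  # filter out the out-of-focus frame

    # Histogram improvement: CLAHE applied to the luma channel only.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    ycrcb[:, :, 0] = clahe.apply(ycrcb[:, :, 0])
    enhanced = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

    # Noise reduction on the color frame.
    return cv2.fastNlMeansDenoisingColored(enhanced, None, 5, 5, 7, 21)
```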
Step 503: if a polyp is detected, issue a polyp alarm and output a prompt indicating which camera captured the image containing the polyp. In one embodiment, if a polyp is detected in an image captured by one of the sub-cameras (for example, the first sub-camera 120), a prompt identifying the first sub-camera 120 is output to the medical operator through an output device (for example, a display). When a polyp is detected, the alarm includes at least one of the following: highlighting, framing, flashing on the screen of the external monitor, or an audio prompt. In subsequent interaction with the I/O device, the operator can steer the main camera toward the region flagged by the alarm to obtain a higher-resolution image for observation.
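One way the framing-plus-prompt alarm could be rendered on the monitor image is sketched below; the label text and drawing style are assumptions for illustration only.

```python
import cv2

def annotate_alarm(frame_bgr, box, camera_name):
    """Draw a red frame around the detected polyp and name the camera that saw it."""
    x1, y1, x2, y2 = box
    cv2.rectangle(frame_bgr, (x1, y1), (x2, y2), (0, 0, 255), 2)   # framing alarm
    cv2.putText(frame_bgr, f"Polyp - {camera_name}", (x1, max(y1 - 10, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)      # camera prompt
    return frame_bgr

# Example: annotate_alarm(frame, (120, 80, 200, 160), "sub-camera 120")
```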
Optionally, in one embodiment, the convolutional neural network model includes three models, trained on a qualified-image library, an anatomical-site library, and a site-feature library, used respectively to judge whether an endoscopic image is acceptable, to identify the anatomical site, and to recognize site features and determine the extent of cancer. The model is ResNet50, developed in Python, packaged as a RESTful API, and called by other modules.
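The text names ResNet50, Python, and a RESTful API but gives no interface details, so the following is only a plausible sketch of such a service; the web framework (Flask), the route name, and the two-class output are assumptions rather than the patent's actual interface.

```python
# pip install flask torch torchvision pillow
import io
from flask import Flask, request, jsonify
from PIL import Image
import torch
from torchvision import models, transforms

app = Flask(__name__)
model = models.resnet50(weights=None)                 # trained weights would be loaded from a checkpoint
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # e.g. "qualified" vs. "unqualified" image (assumed labels)
model.eval()

to_tensor = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

@app.route("/classify", methods=["POST"])             # hypothetical endpoint called by other modules
def classify():
    img = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    with torch.no_grad():
        logits = model(to_tensor(img).unsqueeze(0))
    return jsonify({"class_id": int(logits.argmax(dim=1))})

if __name__ == "__main__":
    app.run(port=8000)
```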
The panoramic endoscope captures images in white light mode, and the captured images are input into a trained target detection model to detect the presence of polyps in real time.
If a polyp is detected, the polyp region is automatically segmented in real time using a deep neural network based on an encoder-decoder structure.
In one embodiment, the panoramic endoscope also has a structured light channel 300, i.e., a channel independent of the ordinary illumination channel, which can be used to measure the size of a lesion under the endoscope. As shown in Fig. 6, the structured light channel 300 includes a structured light generator 34, a coupler 33, an optical fiber 32, and a structured light projector 31.
The structured light generator 34 produces visible light with a wavelength between 400 and 700 nm, or light whose passband width is less than 5% of its central wavelength; the actual choice is determined experimentally to minimize reflections and obtain the clearest structured-light image.
The coupler 33 is configured to couple the light source 201 to the optical fiber 32; the structured light generator 34 is configured to switch in and out between the coupler 33 and the light source 201; one end of the optical fiber 32 is connected to the coupler 33 and the other end to the structured light projector 31; the main illumination lens 111 and the structured light projector 31 are arranged at the distal end of the endoscope. After passing through the coupler 33 and the structured-light fiber 32, the structured light is projected onto the tissue surface by the structured light projector 31. Once projected onto the surface of the object to be measured, it is modulated by the height profile of that object; the modulated structured light is collected by the camera system and sent to a computer, where analysis and calculation yield the three-dimensional data of the measured object.
The illumination of the panoramic endoscope therefore has two modes: a structured light mode and a white light mode.
When switching to structured light mode, the illumination path of the illumination lens is blocked and the structured light generator 34 is switched in between the coupler 33 and the light source 201 to generate structured light;
when switching to white light mode, both the structured light channel 300 and the illumination lens channel provide ordinary illumination light.
The system for measuring the absolute size of the lesion under the endoscope is then switched to structured light mode for structured-light imaging.
The image processing system then analyzes the collected structured-light images and calculates the polyp size in combination with the polyp segmentation results.
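One simple way to combine the two results is sketched below: given per-pixel absolute coordinates recovered from the structured-light image and the binary polyp mask, the polyp's extent can be read off the masked coordinates. The bounding-extent metric is an assumed simplification for illustration, not the patent's calculation.

```python
import numpy as np

def polyp_extent(mask, X, Y, Z):
    """Estimate the polyp's bounding extent from a binary segmentation mask and
    per-pixel absolute coordinate maps (X, Y, Z) recovered from structured light.
    Returns (width, depth, height) in the units of the coordinate maps."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no polyp pixels in the mask
    px, py, pz = X[ys, xs], Y[ys, xs], Z[ys, xs]
    return (px.max() - px.min(), py.max() - py.min(), pz.max() - pz.min())
```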
Optionally, in one embodiment, the target detection model includes a first encoder and a detector. The first encoder is configured to perform feature extraction on an image obtained in white light mode to obtain a feature map; the extracted target features include at least one of the following: blood vessel color features, gland duct features, and edge features. The detector is configured to perform regression on the input feature map to obtain the coordinates of the polyp location.
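A minimal PyTorch sketch of this encoder-plus-detector arrangement is shown below, regressing a single bounding box from the white-light image; the layer widths and the single-box output head are assumptions for illustration, not the patent's network.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Convolutional feature extractor for the white-light image x."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)                # feature map

class Detector(nn.Module):
    """Regresses polyp box coordinates y = (x1, y1, x2, y2) from the feature map."""
    def __init__(self, channels=128):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(channels, 4)

    def forward(self, fmap):
        return self.head(self.pool(fmap).flatten(1))

# Usage: y = Detector()(Encoder()(torch.randn(1, 3, 320, 480)))
```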
Optionally, in one embodiment, the deep neural network based on an encoder-decoder structure includes:
a second encoder for the input image, including convolutional layers, activation layers, and pooling layers;
a decoder for the output result, including convolutional layers, activation layers, concatenation layers, and upsampling layers;
between the second encoder and the decoder, feature maps whose size difference is less than a predetermined threshold are fused together by cross-layer feature fusion.
Optionally, in one embodiment, in the target detection model shown in Fig. 7, the white-light image x is used as input and fed into an encoder for feature extraction; the resulting feature map is then passed through a detector, which regresses the coordinates y of the polyp location. The encoder and detector are composed of deep neural networks, including convolutional layers, downsampling layers, upsampling layers, pooling layers, batch normalization layers, activation layers, and so on. With this target detection model, when the system detects a polyp, it prompts the doctor by drawing a red box around the rectangular region containing the polyp on the monitor and sounds an alarm. The system then performs fine segmentation of the polyp region.
The fine segmentation of the polyp region is performed by a deep neural network with an encoder-decoder structure. The left half is the encoder, including convolutional layers, activation layers, and pooling layers. The right half is the decoder, including convolutional layers, activation layers, concatenation layers, and upsampling layers. Between the encoder and the decoder, feature maps of similar size are fused together directly through cross-layer feature fusion, which better captures context and global information. Fig. 8 shows a U-NET network based on this encoder-decoder structure.
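A compact U-Net-style sketch in PyTorch, showing the encoder-decoder layout with a cross-layer (skip) connection as described above; the channel widths and the two-level depth are assumptions, not the patent's network.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
    )

class MiniUNet(nn.Module):
    """Two-level U-Net: encoder, decoder, and one cross-layer feature fusion."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)                # encoder: convolution + activation
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)                  # encoder: pooling
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = conv_block(64 + 32, 32)          # decoder after concatenation (skip connection)
        self.out = nn.Conv2d(32, 1, 1)               # one-channel polyp mask

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # cross-layer feature fusion
        return torch.sigmoid(self.out(d1))           # per-pixel polyp probability

# Usage: mask = MiniUNet()(torch.randn(1, 3, 256, 256))
```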
Once the lesion has been segmented, the system switches to structured-light imaging mode, either automatically or manually, and the rear end of the illumination path is closed by a blocking lens to stop light from the illumination source. Structured light is projected onto the tissue surface through the structured light channel described above; the deformation of the structured-light pattern is captured by the endoscope camera and transmitted to the image processing center. After the captured structured-light images arrive at the image processing center, it combines the segmentation result with the deformation of the structured-light pattern, computes the three-dimensional contour of the lesion by the phase-shift method, and outputs the lesion size.
Optionally, in one embodiment, FIG. 9 shows a lesion-size calculation model. P is the optical center of the projector and C is the optical center of the camera. O is the intersection of the camera's optical axis and the projector's optical axis. The horizontal plane passing through O is taken as the reference X-axis of the calculation. L1 and L2 are the distances from the camera optical center and the projector optical center to the X-axis, respectively, and d is the distance from the camera optical center to the projector optical center along the X-axis. The A-B-O plane is a virtual plane, assumed to be parallel to the line PC connecting the projector and camera optical centers. For a point Q on the object surface, the height Z of Q relative to the reference plane can be solved from the triangular relationship:

[Equation image PCTCN2022102948-appb-000001: Z expressed in terms of L1, L2, d and the segment AB.]

Since a plane-to-plane projection is a linear mapping, the projector pattern also has a fixed period when mapped onto the virtual plane ABO, and AB is linearly related to the corresponding phase difference.
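The original expression for Z is embedded as an image and is not reproduced here. For reference, the commonly used symmetric-geometry form of this triangular relationship is given below; it assumes the camera and projector optical centers sit at the same height L above the reference plane with baseline d, which may differ from the L1/L2 geometry of FIG. 9.

```latex
% Height of a surface point Q above the reference plane (assumed symmetric geometry):
% A and B are the intersections of the projector and camera rays through Q with the
% reference (or virtual) plane, d is the baseline, L the height of the optical centres.
\[
  \frac{\overline{AB}}{d} = \frac{Z}{L - Z}
  \quad\Longrightarrow\quad
  Z = \frac{L\,\overline{AB}}{\overline{AB} + d},
  \qquad
  \overline{AB} = \frac{p}{2\pi}\,\Delta\varphi ,
\]
% where p is the fringe period on the reference plane and \Delta\varphi is the phase
% difference measured at Q -- the linear relation referred to above.
```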
Next, the absolute phase of point Q is calculated. When a sinusoidal fringe image is projected onto a three-dimensional diffusely reflecting surface, the image of a point Q(x, y) observed by the camera can be expressed as

\[ I(x, y) = A(x, y) + B(x, y)\cos\varphi(x, y), \]

where A(x, y) is the background light intensity, B(x, y)/A(x, y) is the contrast of the grating fringes, and \(\varphi(x, y)\) is the phase value. In the N-step phase-shift method, the projected phase of each sinusoidal-grating shift differs from that of the preceding step by 2π/N, giving the following system of equations:
\[ I_i(x, y) = A(x, y) + B(x, y)\cos\!\left(\varphi(x, y) + \frac{2\pi (i - 1)}{N}\right), \]

where i = 1, 2, …, N. The system contains N equations in three unknowns, so it can be solved when N ≥ 3, yielding

\[ \varphi(x, y) = \arctan\!\left(\frac{-\sum_{i=1}^{N} I_i(x, y)\,\sin\frac{2\pi (i - 1)}{N}}{\sum_{i=1}^{N} I_i(x, y)\,\cos\frac{2\pi (i - 1)}{N}}\right), \]

\[ B(x, y) = \frac{2}{N}\sqrt{\left(\sum_{i=1}^{N} I_i(x, y)\,\sin\frac{2\pi (i - 1)}{N}\right)^{2} + \left(\sum_{i=1}^{N} I_i(x, y)\,\cos\frac{2\pi (i - 1)}{N}\right)^{2}}. \]
Because the arctangent determines the phase only up to multiples of 2π, phase unwrapping is applied to obtain the absolute phase value of point Q(x, y). Substituting the segment AB obtained from this absolute phase through the linear relation above into the height formula yields the height Z of point Q from the virtual plane ABO.
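A minimal NumPy sketch of the N-step phase retrieval and a simple unwrapping step is shown below. It assumes fringe images of the form I_i = A + B·cos(φ + 2πi/N), i = 0, …, N−1, and uses numpy.unwrap row by row, which is a simplification of the more robust spatial or temporal unwrapping a real system would use.

```python
import numpy as np

def n_step_phase(images):
    """Wrapped phase from N phase-shifted fringe images I_i = A + B*cos(phi + 2*pi*i/N)."""
    images = np.asarray(images, dtype=np.float64)   # shape (N, H, W)
    n = images.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n           # phase shift of each step
    s = np.tensordot(np.sin(deltas), images, axes=1)
    c = np.tensordot(np.cos(deltas), images, axes=1)
    return np.arctan2(-s, c)                        # wrapped phase in (-pi, pi]

def unwrap_rows(wrapped):
    """Simple row-wise phase unwrapping (illustrative only)."""
    return np.unwrap(wrapped, axis=1)

# phi = unwrap_rows(n_step_phase(fringe_images))    # absolute phase map
```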
By calibrating the camera parameters, the absolute real-world coordinates (X, Y, Z) of point Q can be computed from Z. Computing the absolute coordinates (X, Y, Z) of every point in the captured structured-light image and combining them with the polyp segmentation result described above yields the three-dimensional size of the polyp.
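A sketch of this last step, assuming a calibrated pinhole camera with intrinsics fx, fy, cx, cy, a per-pixel depth map derived from the height Z via the calibrated geometry, and a binary polyp mask from the segmentation network. Taking the extent as the spread of the masked 3-D points is a simplification of any clinical sizing rule.

```python
import numpy as np

def height_from_phase(phi, phi_ref, L, d, period):
    """Height above the reference plane from the unwrapped phase (symmetric geometry)."""
    ab = (phi - phi_ref) * period / (2 * np.pi)   # offset AB on the reference plane
    return L * ab / (ab + d)

def polyp_extent(depth, mask, fx, fy, cx, cy):
    """Approximate 3-D bounding extent of the segmented polyp region."""
    v, u = np.nonzero(mask)                       # pixel coordinates inside the mask
    z = depth[v, u]                               # camera-frame depth per masked pixel
    x = (u - cx) * z / fx                         # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    return pts.max(axis=0) - pts.min(axis=0)      # (X, Y, Z) extent in world units
```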
All documents mentioned in this application are incorporated by reference into the disclosure of this application in their entirety, so that they may serve as a basis for amendment where necessary. It should further be understood that, having read the above disclosure, a person skilled in the art may make various changes or modifications to this application, and such equivalent forms likewise fall within the scope of protection claimed by this application.

Claims (10)

  1. A panoramic endoscope, characterized in that it comprises:
    a main camera arranged at the distal tip of the endoscope and facing directly forward;
    a plurality of secondary cameras arranged on the side wall of the distal tip;
    wherein the field of view of the main camera and the fields of view of the plurality of secondary cameras together cover a spherical field of view.
  2. The panoramic endoscope according to claim 1, further comprising:
    a main illumination lens, arranged at the front of the distal tip and connected to a light source through a light-guide bundle, for providing illumination for the main camera; and
    a secondary illumination lens, arranged on the side wall of the distal tip and connected to the light source through a light-guide bundle, for providing illumination for the secondary cameras.
  3. The panoramic endoscope according to claim 1, characterized in that the pixel count of the main camera is higher than that of the secondary cameras.
  4. The panoramic endoscope according to claim 1, characterized in that the viewing angle of the spherical field of view is not less than 330 degrees.
  5. A panoramic endoscope image processing method, characterized in that it comprises:
    synchronously acquiring images from the main camera and the plurality of secondary cameras of the panoramic endoscope according to any one of claims 1 to 4;
    performing polyp detection separately on the images synchronously acquired from the main camera and the plurality of secondary cameras; and
    if a polyp is detected, issuing a polyp alarm and outputting prompt information indicating the camera from which the image in which the polyp was detected originates.
  6. The panoramic endoscope image processing method according to claim 5, characterized in that the step of performing polyp detection on the images synchronously acquired from the main camera and the plurality of secondary cameras comprises center enhancement, denoising, and color-rendering processing to improve image quality and recognizability.
  7. The panoramic endoscope image processing method according to claim 5, characterized in that, when a polyp is detected, the alarm takes at least one of the following forms: highlighting on the screen of an external monitor, marking with a frame, flashing display, or an audio prompt.
  8. The panoramic endoscope image processing method according to claim 5, characterized in that the images acquired by the main camera and the secondary cameras are captured in white-light mode.
  9. The panoramic endoscope image processing method according to claim 5, characterized in that the captured images are input into a trained target detection model to detect in real time whether a polyp is present;
    the target detection model comprises a first encoder and a detector;
    the first encoder is configured to perform feature extraction on images obtained in white-light mode to obtain a feature map; and
    the detector is configured to perform regression on the input feature map to obtain the coordinates of the location of the polyp.
  10. The panoramic endoscope image processing method according to claim 5, characterized in that polyp detection is performed by a convolutional neural network, and the extracted target features include at least one of the following: blood-vessel color features, gland-duct features, and edge features.
PCT/CN2022/102948 2021-08-23 2022-06-30 Panoramic endoscope and image processing method thereof WO2023024701A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110970203.3 2021-08-23
CN202110970203.3A CN115708658A (en) 2021-08-23 2021-08-23 Panoramic endoscope and image processing method thereof

Publications (1)

Publication Number Publication Date
WO2023024701A1 true WO2023024701A1 (en) 2023-03-02

Family

ID=85230329

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/102948 WO2023024701A1 (en) 2021-08-23 2022-06-30 Panoramic endoscope and image processing method thereof

Country Status (2)

Country Link
CN (1) CN115708658A (en)
WO (1) WO2023024701A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116935051A (en) * 2023-07-20 2023-10-24 深圳大学 Polyp segmentation network method, system, electronic equipment and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091366B (en) * 2023-04-07 2023-08-22 成都华域天府数字科技有限公司 Multi-dimensional shooting operation video and method for eliminating moire
CN117243700B (en) * 2023-11-20 2024-03-08 北京云力境安科技有限公司 Method and related device for detecting endoscope conveying length

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010279539A (en) * 2009-06-04 2010-12-16 Fujifilm Corp Diagnosis supporting apparatus, method, and program
US20150201827A1 (en) * 2010-10-28 2015-07-23 Endochoice, Inc. Image Capture and Video Processing Systems and Methods for Multiple Viewing Element Endoscopes
CN103348470A (en) * 2010-12-09 2013-10-09 恩多巧爱思创新中心有限公司 Flexible electronic circuit board for a multi-camera endoscope
CN104717916A (en) * 2012-10-18 2015-06-17 恩多卓斯创新中心有限公司 Multi-camera endoscope
CN104856635A (en) * 2015-05-14 2015-08-26 珠海视新医用科技有限公司 Tip end structure of panoramic endoscope
US20170071456A1 (en) * 2015-06-10 2017-03-16 Nitesh Ratnakar Novel 360-degree panoramic view formed for endoscope adapted thereto with multiple cameras, and applications thereof to reduce polyp miss rate and facilitate targeted polyp removal


Also Published As

Publication number Publication date
CN115708658A (en) 2023-02-24

Similar Documents

Publication Publication Date Title
WO2023024701A1 (en) Panoramic endoscope and image processing method thereof
US7744528B2 (en) Methods and devices for endoscopic imaging
JP5675227B2 (en) Endoscopic image processing apparatus, operation method, and program
JP2012511361A5 (en) Apparatus and image processing unit for improved infrared image processing and functional analysis of blood vessel structures such as blood vessels
Herrera et al. Development of a Multispectral Gastroendoscope to Improve the Detection of Precancerous Lesions in Digestive Gastroendoscopy
JP6967602B2 (en) Inspection support device, endoscope device, operation method of endoscope device, and inspection support program
JP4994737B2 (en) Medical image processing apparatus and medical image processing method
JP2010279539A (en) Diagnosis supporting apparatus, method, and program
WO2012165204A1 (en) Endoscope device
WO2012165203A1 (en) Endoscope device
WO2014119047A1 (en) Image processing device for endoscopes, endoscope device, image processing method, and image processing program
US11423318B2 (en) System and methods for aggregating features in video frames to improve accuracy of AI detection algorithms
KR20140108066A (en) Endoscope system and control method thereof
JP2017534322A (en) Diagnostic mapping method and system for bladder
JP6132901B2 (en) Endoscope device
WO2014168128A1 (en) Endoscope system and operation method for endoscope system
WO2020162275A1 (en) Medical image processing device, endoscope system, and medical image processing method
US20100262000A1 (en) Methods and devices for endoscopic imaging
JPWO2019220859A1 (en) Image processing equipment, endoscopic system, and image processing method
JPWO2014148184A1 (en) Endoscope system
US20150363929A1 (en) Endoscope apparatus, image processing method, and information storage device
JP2023026480A (en) Medical image processing device, endoscope system, and operation method of medical image processing device
CN109068035B (en) Intelligent micro-camera array endoscopic imaging system
JP2022002701A (en) Endoscope observation method, endoscope observation system, and software program product
US20150073210A1 (en) Imaging system, method and distal attachment for multidirectional field of view endoscopy

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22860048

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE