CN113850299A - Gastrointestinal tract capsule endoscopy video key frame extraction method capable of self-adapting to threshold - Google Patents


Info

Publication number
CN113850299A
Authority
CN
China
Prior art keywords
image
images
video
frequency
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111021196.9A
Other languages
Chinese (zh)
Other versions
CN113850299B (en)
Inventor
李胜
韩建红
向中坡
夏瑞瑞
马悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Ada Technology Co ltd
Original Assignee
Zhejiang Ada Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Ada Technology Co ltd filed Critical Zhejiang Ada Technology Co ltd
Priority to CN202111021196.9A priority Critical patent/CN113850299B/en
Priority claimed from CN202111021196.9A external-priority patent/CN113850299B/en
Publication of CN113850299A publication Critical patent/CN113850299A/en
Application granted granted Critical
Publication of CN113850299B publication Critical patent/CN113850299B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images

Abstract

A gastrointestinal capsule endoscopy video key frame extraction method based on an adaptive threshold. The similarity of adjacent frames is judged from efficient, concise image color features, and an adaptive threshold is set according to this similarity to remove a large number of redundant images. Observation of capsule endoscopy video frames shows many images with motion blur or defocus blur, so a frequency-feature method is used to screen out and remove blurred images. In addition, a color-histogram-based method is proposed to remove virtual focus images caused by the lens being too close to the organ wall. The proposed method not only screens out a large number of redundant images quickly and efficiently, but also further removes images that contain no useful information (such as blurred images and virtual focus images), yielding video key frames of diagnostic value, and therefore has important practical application value.

Description

Gastrointestinal tract capsule endoscopy video key frame extraction method capable of self-adapting to threshold
Technical Field
The invention relates to the technical field of image processing, and in particular to a gastrointestinal capsule endoscopy video key frame extraction method with an adaptive threshold.
Background
Hand-held endoscopes and capsule endoscopes are the primary means of examining gastrointestinal diseases. A capsule endoscope integrates an LED light source, a camera lens, a CMOS image sensor, an ASIC transmitter, an antenna, a battery and other components inside a capsule. After the capsule enters the human body, images captured by the lens are converted into electrical signals by the image sensor and transmitted out of the body through the antenna; a receiver carried by the patient receives and stores the image data, and a physician downloads the images from the receiver and examines them on a computer to diagnose the condition. In clinical examination, a capsule endoscope captures thousands of digestive-tract images that are stored as a video stream often many hours long, and physicians must spend a great deal of time searching this stream for suspicious lesion images, which is heavy and time-consuming work. Observation of the video frames shows that adjacent frames are highly similar, i.e., the original video contains a large amount of redundant information; removing this redundancy to extract the key frames of the capsule endoscopy video is of great significance for improving the efficiency of video review.
Existing capsule endoscopy video key frame extraction methods suffer from low accuracy and slow processing speed. Low accuracy limits how much the video can be condensed and increases the probability of misdiagnosis and missed diagnosis; as a preprocessing step, a method that is too slow is impractical. The present method judges the similarity of adjacent frames using efficient, concise image color features and sets an adaptive threshold according to that similarity to remove a large number of redundant images. Because observation of capsule endoscopy video frames reveals many images with motion blur or defocus blur (as shown in Fig. 1), a method that screens out and removes blurred images using frequency features is proposed. In addition, a color-histogram-based method is proposed to remove virtual focus images (as shown in Fig. 1) caused by the lens being too close to the organ wall. The method proposed in this patent not only screens out a large number of redundant images quickly and efficiently, but also further removes images containing no useful information (such as blurred images and virtual focus images), yielding video key frames of diagnostic value, and therefore has important practical application value.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a gastrointestinal capsule endoscopy video key frame extraction method with an adaptive threshold. The method converts image color information to the HSV space and extracts it for similarity measurement, sets the threshold adaptively to remove redundant images, then removes blurred images according to frequency features, and finally removes virtual focus images according to the RGB color histogram, ultimately obtaining the video key frames.
To solve the above technical problems, the invention adopts the following technical solution:
A gastrointestinal capsule endoscopy video key frame extraction method with an adaptive threshold comprises the following steps:
Step 1: take the capsule endoscopy video as input and cut it into video frames to obtain a video frame data set;
Step 2: take the video frame data set as input and obtain images of uniform size;
Step 3: for video frames with a large black border, which would interfere with the subsequent removal of redundant information by color matching, remove the black border by automatic cropping;
Step 4: perform similarity measurement by extracting the color features of two adjacent frames of the capsule endoscopy video; the images acquired by the capsule endoscope are converted to the HSV color space for analysis, where HSV is divided into the three components H, S and V, representing hue, saturation and value respectively, and compared with the RGB color space the HSV components are mutually independent and better match human visual perception; after the HSV color components of the images are obtained, the distance between two adjacent frames is calculated with the Euclidean formula, and if the distance is greater than a certain threshold the two frames are judged to differ significantly; the threshold is determined adaptively from the HSV color components, the similarity between adjacent frames is compared against this threshold, and if two frames are similar the earlier frame is retained; redundant images are removed in this way frame by frame;
Step 5: distinguish blurred images from sharp images by acquiring the high-frequency information of each frame and setting a threshold, so as to remove blurred images;
Step 6: in the key frames obtained after removing blurred images, many virtual focus images caused by the lens being too close to the organ wall contain no useful information, so a frequency-variation threshold is set to remove them;
Step 7: because reducing the image size at the beginning loses part of the feature information, restore the processed reduced video frames to frames of the same size as the video resolution so that clinicians can observe them better, and finally output the video key frames.
Further, in step 3, three different cropping rules are set. Let B be the number of black pixels of the size-reduced image, D the total number of pixels, and Y the image width: when B < D × α, no cropping is performed; when D × α < B < D × β, a border of (1/5) × Y is cropped; when B > D × β, a border of (1/3) × Y is cropped, where α = 23% and β = 38%.
Still further, in step 4, the threshold of the first test data set is set as the initial threshold T(i), H(i) is the maximum value of the H component generated by the first input data set, and H(i+1) is the maximum value of the H component generated by a subsequent input data set, giving the following formula:
[adaptive-threshold formula published only as an image: T(i+1) expressed in terms of T(i), H(i), H(i+1) and Δ]
where T(i+1) represents the adaptive threshold for the subsequent input data set and Δ represents a variation parameter;
after the threshold is determined from the HSV components, the similarity between adjacent frames is compared; if two frames are similar, the earlier frame is retained, and so on, thereby removing redundant images.
Furthermore, in step 5, the frequency spectrum of an image reflects the distribution of image energy over frequency: the high-frequency and low-frequency components of the spectrum correspond respectively to edge information and smooth information in the image. In a blurred image, edge and detail information is weakened by the blur while low-frequency information such as smooth regions is essentially unaffected; a sharp image has rich high-frequency information from edges, texture and detail, so its spectral energy is more concentrated in the high-frequency region than that of a blurred image, and the degree of blur or sharpness is judged from the amount of high-frequency information. The image is Fourier transformed to obtain its spectrum, the high-frequency and low-frequency components are separated, the high-frequency components of each image are summed and normalized, the high-frequency content of the images is compared, and a threshold Td is set: images above the threshold are judged sharp, otherwise blurred, finally obtaining the video frames with blurred images removed.
Furthermore, in step 6, virtual focus images are distinguished from normal images by calculating the variance of the histogram frequencies of the R component in the interval above 150; when the variance V > 3.2e-4, the image is judged to be a virtual focus image.
The invention has the following beneficial effects: first, an adaptive threshold based on the color similarity between adjacent frames removes a large number of redundant images; then a frequency-feature threshold is proposed to distinguish blurred images from sharp images and further remove blurred images; finally, a color-histogram threshold is proposed to remove virtual focus images, ultimately yielding the video key frames. The invention not only relieves clinicians of heavy work, improves diagnostic efficiency and reduces the misdiagnosis rate, but also has important practical application value.
Drawings
Fig. 1 is a schematic diagram of a blurred image, a virtual focus image, and an image with a black border, wherein (a) is the blurred image, (b) is the virtual focus image, and (c) is the image with the black border.
Fig. 2 is an overall flow chart of the present invention.
Detailed Description
For the purpose of illustrating the objects and technical solutions of the present invention, the following detailed description of the present invention is provided in conjunction with the accompanying drawings.
Referring to Figs. 1 and 2, a gastrointestinal capsule endoscopy video key frame extraction method with an adaptive threshold comprises the following steps:
Step 1: the capsule endoscopy video is taken as input and cut into video frames to obtain a video frame data set.
Step 2: video frames obtained from different devices have different resolutions, which is unfavorable for the subsequent threshold judgment; at the same time, to speed up processing, the video frame data set is taken as input and images of uniform size are obtained.
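As an illustration only (not part of the original patent text), the following Python/OpenCV sketch shows one way steps 1 and 2 could be implemented; the 256×256 working size is an assumed value, since the patent does not specify the reduced resolution.

```python
# Illustrative sketch (assumed details): split the capsule video into frames and
# resize them to a uniform working size. The file path and the 256x256 target
# size are assumptions for illustration; the patent does not specify them.
import cv2

def video_to_frames(video_path, target_size=(256, 256)):
    """Read a capsule endoscopy video and return uniformly resized frames."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, target_size))
    cap.release()
    return frames
```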
Step 3: judge whether the video frame has a large black border. The black border is judged by image thresholding: the video frame is read, noise is suppressed with median filtering, the image is converted to a black-and-white binary image by thresholding, and the size of the black border is judged by counting the black pixels in the image. Because the black borders of video frames generated by different devices differ in size, three different cropping rules are set. Let B be the number of black pixels of the size-reduced image, D the total number of pixels, and Y the image width: when B < D × α, no cropping is performed; when D × α < B < D × β, a border of (1/5) × Y is cropped; when B > D × β, a border of (1/3) × Y is cropped, where α = 23% and β = 38%. A video frame with a black border wastes resolution and adversely affects subsequent operations such as color matching, so the black border is cropped off.
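A minimal sketch of this cropping rule follows, assuming Python with OpenCV; the binarization level (intensity 20) and the interpretation of the border as being removed from every side are assumptions, since the patent names only the thresholding operation and the border widths.

```python
import cv2
import numpy as np

ALPHA, BETA = 0.23, 0.38   # black-pixel ratio thresholds from the description

def crop_black_border(frame, black_level=20):
    """Estimate the black border of a frame and crop it with the three-level rule.

    black_level (intensity below which a pixel counts as black) is an assumed
    value; the patent describes thresholding but does not give the level.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                        # suppress noise
    _, binary = cv2.threshold(gray, black_level, 255, cv2.THRESH_BINARY)
    B = int(np.count_nonzero(binary == 0))                # black pixels
    D = binary.size                                       # total pixels
    Y = frame.shape[1]                                    # image width
    if B < D * ALPHA:
        margin = 0                                        # no cropping
    elif B < D * BETA:
        margin = Y // 5                                   # border of (1/5)*Y
    else:
        margin = Y // 3                                   # border of (1/3)*Y
    if margin == 0:
        return frame
    # Assumed interpretation: the border is removed from all four sides.
    return frame[margin:frame.shape[0] - margin, margin:Y - margin]
```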
Step 4: the images of uniform size with black borders removed are taken as input, and to remove the large number of redundant images, similarity measurement is performed on the color features of adjacent frames. The images acquired by the capsule endoscope are converted to the HSV color space for analysis, where HSV consists of the three components H, S and V, representing hue, saturation and value respectively; compared with the RGB color space, the HSV components are mutually independent and better match human visual perception. After the HSV color components of the images are obtained, the distance between two adjacent frames is calculated with the Euclidean formula and an appropriate threshold is set on this distance; if the distance between two adjacent frames exceeds the threshold, the two frames are judged to differ significantly. A large number of experiments show that different data sets require different thresholds: the larger the maximum value of the H component generated by a data set, the smaller the threshold, and vice versa. In this patent, the threshold of the first test data set is set as the initial threshold T(i), H(i) is the maximum value of the H component generated by the first input data set, and H(i+1) is the maximum value of the H component generated by a subsequent input data set, giving the following formula:
[adaptive-threshold formula published only as an image: T(i+1) expressed in terms of T(i), H(i), H(i+1) and Δ]
where T(i+1) represents the adaptive threshold for the subsequent input data set and Δ represents a variation parameter.
After an appropriate threshold is determined from the HSV components, the similarity between adjacent frames is compared; if two frames are similar, the earlier frame is retained, and so on, thereby removing redundant images.
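The sketch below illustrates one way this HSV similarity comparison could look in Python/OpenCV. The mean-HSV feature, the initial threshold value and, in particular, the threshold-update rule are assumptions: the patent's update formula for T(i+1) is published only as an image, so the inverse-proportional update shown here merely reproduces the stated trend (a larger maximum H component gives a smaller threshold) and is not the patented formula.

```python
import cv2
import numpy as np

def hsv_feature(frame):
    """Mean H, S, V of a frame -- an assumed simple color descriptor."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    return hsv.reshape(-1, 3).mean(axis=0)

def max_h_component(frames):
    """Maximum hue value observed over a data set."""
    return max(float(cv2.cvtColor(f, cv2.COLOR_BGR2HSV)[:, :, 0].max()) for f in frames)

def adapt_threshold(t_init, h_max_first, h_max_new, delta=0.0):
    """Assumed stand-in for the patent's T(i+1) formula (published as an image):
    the threshold shrinks as the data set's maximum H component grows."""
    return t_init * (h_max_first / h_max_new) + delta

def remove_redundant(frames, threshold):
    """Keep a frame only when its HSV feature is far enough (Euclidean distance)
    from the last kept frame; otherwise the earlier frame stands."""
    if not frames:
        return []
    kept = [frames[0]]
    last_feat = hsv_feature(frames[0])
    for frame in frames[1:]:
        feat = hsv_feature(frame)
        if np.linalg.norm(feat - last_feat) > threshold:   # frames differ enough
            kept.append(frame)
            last_feat = feat
    return kept
```

In this reading, the threshold is fixed per data set: T(i) is chosen for the first data set, and adapt_threshold derives the threshold for each later data set from its maximum H value.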
Step 5: observation of the data set after redundant images are removed shows that it still contains many images with motion blur and defocus blur; these contain no useful information and must be further removed, and this patent proposes removing them using frequency features. The frequency spectrum of an image reflects the distribution of image energy over frequency, and the high-frequency and low-frequency components of the spectrum correspond respectively to edge information and smooth information in the image. In a blurred image, edge and detail information is weakened by the blur while low-frequency information such as smooth regions is essentially unaffected; a sharp image has rich high-frequency information from edges, texture and detail, so its spectral energy is more concentrated in the high-frequency region than that of a blurred image, and the degree of blur or sharpness is judged from the amount of high-frequency information. The image is Fourier transformed to obtain its spectrum, the high-frequency and low-frequency components are separated, the high-frequency components of each image are summed and normalized, and the high-frequency content of the images is compared. To reduce the probability of missed diagnosis, a threshold Td is set through repeated experimental statistics: images above the threshold are judged sharp, otherwise blurred, finally obtaining video frames with the blurred images removed.
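A hedged sketch of the frequency-based blur screening follows, again in Python. The low/high-frequency cutoff radius and the example threshold value are assumptions; the patent states only that the bands are separated and that Td is chosen experimentally.

```python
import cv2
import numpy as np

def high_freq_ratio(frame, radius=30):
    """Normalized high-frequency spectral energy of a frame.

    radius (the assumed cutoff separating low from high frequencies) is an
    illustrative choice; the patent does not specify how the bands are split.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    spectrum = np.fft.fftshift(np.fft.fft2(gray))          # Fourier transform
    magnitude = np.abs(spectrum)
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    low_mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    return float(magnitude[~low_mask].sum() / magnitude.sum())   # normalized

def remove_blurred(frames, t_d=0.35):
    """Keep frames whose high-frequency ratio exceeds the threshold T_d
    (t_d = 0.35 is an illustrative value, not the experimentally chosen one)."""
    return [f for f in frames if high_freq_ratio(f) > t_d]
```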
Step 6: the video frame data set after the above operations still contains virtual focus images caused by the lens being too close to the organ wall. To further refine the key frames, this patent proposes a method based on the color histogram. Statistical comparison shows that virtual focus images and normal images differ markedly in the histogram frequencies of the R component in the interval above 150. Virtual focus images are therefore distinguished from normal images by calculating the variance of these frequencies: when the variance V > 3.2e-4, the image is judged to be a virtual focus image. The final gastrointestinal capsule endoscopy video key frames are obtained by this method.
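The sketch below shows how the color-histogram check could be realized. The upper bound of the R-component interval is truncated in the published text, so the full tail above bin 150 is used here as an assumption; the 3.2e-4 variance threshold is taken from the description.

```python
import cv2
import numpy as np

def r_tail_variance(frame, lower=150):
    """Variance of the normalized R-channel histogram frequencies above bin 150.

    Using the full 150..255 tail is an assumption: the interval's upper bound
    is truncated in the published text.
    """
    r = frame[:, :, 2]                                      # OpenCV frames are BGR
    hist = cv2.calcHist([r], [0], None, [256], [0, 256]).ravel()
    hist /= hist.sum()                                      # counts -> frequencies
    return float(np.var(hist[lower:]))

def remove_virtual_focus(frames, v_thresh=3.2e-4):
    """Discard frames judged to be virtual focus images (variance V > 3.2e-4)."""
    return [f for f in frames if r_tail_variance(f) <= v_thresh]
```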
Step 7: because reducing the image size loses part of the feature information, the obtained key frames are restored to frames of the same size as the video resolution so that clinicians can observe them better, and the final capsule endoscopy video key frames are generated.
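Chaining the sketches above gives an assumed end-to-end pipeline (it relies on the helper functions defined in the earlier blocks being in scope); restoring the key frames to the original resolution in the last step uses the source video's frame size, and the initial threshold of 10.0 is an illustrative assumption.

```python
import cv2

def extract_keyframes(video_path):
    """Assumed end-to-end pipeline built from the sketches above."""
    cap = cv2.VideoCapture(video_path)
    original_size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                     int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    cap.release()

    frames = video_to_frames(video_path)                    # steps 1-2
    frames = [crop_black_border(f) for f in frames]         # step 3
    h_max = max_h_component(frames)
    threshold = adapt_threshold(10.0, h_max, h_max)          # first data set keeps T(i)
    frames = remove_redundant(frames, threshold)             # step 4
    frames = remove_blurred(frames)                          # step 5
    frames = remove_virtual_focus(frames)                    # step 6
    # Step 7: restore key frames to the original video resolution.
    return [cv2.resize(f, original_size, interpolation=cv2.INTER_CUBIC) for f in frames]
```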
The invention provides a gastrointestinal capsule endoscopy video key frame extraction method with an adaptive threshold, which removes redundant images and images containing no useful information, obtains a more condensed set of key frames, better helps clinicians diagnose gastrointestinal diseases, greatly improves diagnostic efficiency, and has important practical application value.
The embodiments described in this specification are merely illustrative of implementations of the inventive concepts, which are intended for purposes of illustration only. The scope of the present invention should not be construed as being limited to the particular forms set forth in the examples, but rather as being defined by the claims and the equivalents thereof which can occur to those skilled in the art upon consideration of the present inventive concept.

Claims (5)

1. A gastrointestinal capsule endoscopy video key frame extraction method with an adaptive threshold, characterized by comprising the following steps:
Step 1: take the capsule endoscopy video as input and cut it into video frames to obtain a video frame data set;
Step 2: take the video frame data set as input and obtain images of uniform size;
Step 3: for video frames with a large black border, which would interfere with the subsequent removal of redundant information by color matching, remove the black border by automatic cropping;
Step 4: perform similarity measurement by extracting the color features of two adjacent frames of the capsule endoscopy video; the images acquired by the capsule endoscope are converted to the HSV color space for analysis, where HSV is divided into the three components H, S and V, representing hue, saturation and value respectively, and compared with the RGB color space the HSV components are mutually independent and better match human visual perception; after the HSV color components of the images are obtained, the distance between two adjacent frames is calculated with the Euclidean formula, and if the distance is greater than a certain threshold the two frames are judged to differ significantly; the threshold is determined adaptively from the HSV color components, the similarity between adjacent frames is compared against this threshold, and if two frames are similar the earlier frame is retained; redundant images are removed in this way frame by frame;
Step 5: distinguish blurred images from sharp images by acquiring the high-frequency information of each frame and setting a threshold, so as to remove blurred images;
Step 6: in the key frames obtained after removing blurred images, many virtual focus images caused by the lens being too close to the organ wall contain no useful information, so a frequency-variation threshold is set to remove them;
Step 7: restore the processed reduced video frames to frames of the same size as the video resolution, and finally output the video key frames.
2. The adaptive-threshold gastrointestinal capsule endoscopy video key frame extraction method of claim 1, characterized in that in step 3, three different cropping rules are set; let B be the number of black pixels of the size-reduced image, D the total number of pixels, and Y the image width: when B < D × α, no cropping is performed; when D × α < B < D × β, a border of (1/5) × Y is cropped; when B > D × β, a border of (1/3) × Y is cropped, where α = 23% and β = 38%.
3. The adaptive-threshold gastrointestinal capsule endoscopy video key frame extraction method of claim 1 or 2, characterized in that in step 4, the threshold of the first test data set is set as the initial threshold T(i), H(i) is the maximum value of the H component generated by the first input data set, and H(i+1) is the maximum value of the H component generated by a subsequent input data set, giving the following formula:
[adaptive-threshold formula published only as an image: T(i+1) expressed in terms of T(i), H(i), H(i+1) and Δ]
where T(i+1) represents the adaptive threshold for the subsequent input data set and Δ represents a variation parameter;
after the threshold is determined from the HSV components, the similarity between adjacent frames is compared; if two frames are similar, the earlier frame is retained, and so on, thereby removing redundant images.
4. The adaptive-threshold gastrointestinal capsule endoscopy video key frame extraction method of claim 1 or 2, characterized in that in step 5, the frequency spectrum of an image reflects the distribution of image energy over frequency, and the high-frequency and low-frequency components of the spectrum correspond respectively to edge information and smooth information in the image; in a blurred image the edge and detail information is weakened by the blur while low-frequency information such as smooth regions is essentially unaffected, whereas a sharp image has rich high-frequency information from edges, texture and detail, so its spectral energy is more concentrated in the high-frequency region than that of a blurred image, and the degree of blur or sharpness is judged from the amount of high-frequency information; the image is Fourier transformed to obtain its spectrum, the high-frequency and low-frequency components are separated, the high-frequency components of each image are summed and normalized, the high-frequency content of the images is compared, and a threshold Td is set, images above the threshold being judged sharp and the others blurred, finally obtaining the video frames with blurred images removed.
5. The adaptive-threshold gastrointestinal capsule endoscopy video key frame extraction method of claim 1 or 2, characterized in that in step 6, virtual focus images are distinguished from normal images by calculating the variance of the histogram frequencies of the R component in the interval above 150; when the variance V > 3.2e-4, the image is judged to be a virtual focus image.
CN202111021196.9A 2021-09-01 Gastrointestinal capsule endoscope video key frame extraction method with self-adaptive threshold Active CN113850299B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111021196.9A CN113850299B (en) 2021-09-01 Gastrointestinal capsule endoscope video key frame extraction method with self-adaptive threshold

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111021196.9A CN113850299B (en) 2021-09-01 Gastrointestinal capsule endoscope video key frame extraction method with self-adaptive threshold

Publications (2)

Publication Number Publication Date
CN113850299A true CN113850299A (en) 2021-12-28
CN113850299B CN113850299B (en) 2024-05-14


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114767268A (en) * 2022-03-31 2022-07-22 复旦大学附属眼耳鼻喉科医院 Anatomical structure tracking method and device suitable for endoscope navigation system
CN115564712A (en) * 2022-09-07 2023-01-03 长江大学 Method for removing redundant frames of video images of capsule endoscope based on twin network

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104683660A (en) * 2015-01-29 2015-06-03 乐视网信息技术(北京)股份有限公司 Video denoising method and device
CN105913096A (en) * 2016-06-29 2016-08-31 广西大学 Extracting method for disordered image key frame
US20160314569A1 (en) * 2015-04-23 2016-10-27 Ilya Lysenkov Method to select best keyframes in online and offline mode
CN106851437A (en) * 2017-01-17 2017-06-13 南通同洲电子有限责任公司 A kind of method for extracting video frequency abstract
KR101870700B1 (en) * 2017-03-07 2018-06-25 광운대학교 산학협력단 A fast key frame extraction method for 3D reconstruction from a handheld video
CN111311562A (en) * 2020-02-10 2020-06-19 浙江华创视讯科技有限公司 Method and device for detecting ambiguity of virtual focus image
CN112270247A (en) * 2020-10-23 2021-01-26 杭州卷积云科技有限公司 Key frame extraction method based on inter-frame difference and color histogram difference
CN113112519A (en) * 2021-04-23 2021-07-13 电子科技大学 Key frame screening method based on interested target distribution

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104683660A (en) * 2015-01-29 2015-06-03 乐视网信息技术(北京)股份有限公司 Video denoising method and device
US20160314569A1 (en) * 2015-04-23 2016-10-27 Ilya Lysenkov Method to select best keyframes in online and offline mode
CN105913096A (en) * 2016-06-29 2016-08-31 广西大学 Extracting method for disordered image key frame
CN106851437A (en) * 2017-01-17 2017-06-13 南通同洲电子有限责任公司 A kind of method for extracting video frequency abstract
KR101870700B1 (en) * 2017-03-07 2018-06-25 광운대학교 산학협력단 A fast key frame extraction method for 3D reconstruction from a handheld video
CN111311562A (en) * 2020-02-10 2020-06-19 浙江华创视讯科技有限公司 Method and device for detecting ambiguity of virtual focus image
CN112270247A (en) * 2020-10-23 2021-01-26 杭州卷积云科技有限公司 Key frame extraction method based on inter-frame difference and color histogram difference
CN113112519A (en) * 2021-04-23 2021-07-13 电子科技大学 Key frame screening method based on interested target distribution

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
彭利民: "Research on key frame extraction using an adaptive-threshold clustering algorithm", Journal of Shanghai Institute of Technology (Natural Science), no. 01, 15 March 2008 (2008-03-15) *
彭同胜; 刘小燕; 龚军辉; 蒋笑笑: "Capsule endoscopy video reduction based on color matching and improved LBP", Journal of Electronic Measurement and Instrumentation, no. 09, 15 September 2016 (2016-09-15) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114767268A (en) * 2022-03-31 2022-07-22 复旦大学附属眼耳鼻喉科医院 Anatomical structure tracking method and device suitable for endoscope navigation system
CN114767268B (en) * 2022-03-31 2023-09-22 复旦大学附属眼耳鼻喉科医院 Anatomical structure tracking method and device suitable for endoscope navigation system
CN115564712A (en) * 2022-09-07 2023-01-03 长江大学 Method for removing redundant frames of video images of capsule endoscope based on twin network
CN115564712B (en) * 2022-09-07 2023-07-18 长江大学 Capsule endoscope video image redundant frame removing method based on twin network

Similar Documents

Publication Publication Date Title
JP5094036B2 (en) Endoscope insertion direction detection device
KR102063492B1 (en) The Method and System for Filtering the Obstacle Data in Machine Learning of Medical Images
CN110738655B (en) Image report generation method, device, terminal and storage medium
US8768024B1 (en) System and method for real time detection of villi texture in an image stream of the gastrointestinal tract
US7907775B2 (en) Image processing apparatus, image processing method and image processing program
JP2010158308A (en) Image processing apparatus, image processing method and image processing program
CN111091559A (en) Depth learning-based auxiliary diagnosis system for small intestine sub-scope lymphoma
CN111784686A (en) Dynamic intelligent detection method, system and readable storage medium for endoscope bleeding area
Suman et al. Image enhancement using geometric mean filter and gamma correction for WCE images
CN113888518A (en) Laryngopharynx endoscope tumor detection and benign and malignant classification method based on deep learning segmentation and classification multitask
EP3016070A1 (en) Detection device, learning device, detection method, learning method, and program
Chen et al. Ulcer detection in wireless capsule endoscopy video
Ghosh et al. Block based histogram feature extraction method for bleeding detection in wireless capsule endoscopy
CN113850299B (en) Gastrointestinal capsule endoscope video key frame extraction method with self-adaptive threshold
CN113850299A (en) Gastrointestinal tract capsule endoscopy video key frame extraction method capable of self-adapting to threshold
CN116071337A (en) Endoscopic image quality evaluation method based on super-pixel segmentation
CN116385340A (en) Medical endoscope image rapid defogging method and system
EP4241650A1 (en) Image processing method, and electronic device and readable storage medium
CN113679327B (en) Endoscopic image acquisition method and device
Arnold et al. Indistinct frame detection in colonoscopy videos
CN112053399B (en) Method for positioning digestive tract organs in capsule endoscope video
Li et al. Capsule endoscopy video boundary detection
WO2021181440A1 (en) Image recording system, and image recording method
Alizadeh et al. Effects of improved adaptive gamma correction method on wireless capsule endoscopy images: Illumination compensation and edge detection
Yadav et al. Comparative analysis of different enhancement method on digital mammograms

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant