CN112488979A - Endoscope image recognition method - Google Patents
- Publication number
- CN112488979A (application CN201910768365.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- threshold
- probability distribution
- color space
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T 7/0012 — Biomedical image inspection
- G06F 18/22 — Pattern recognition: matching criteria, e.g. proximity measures
- G06T 7/11 — Region-based segmentation
- G06T 7/136 — Segmentation involving thresholding
- G06T 7/194 — Segmentation involving foreground-background segmentation
- G06T 2207/10068 — Image acquisition modality: endoscopic image
- G06T 2207/30028 — Subject of image: colon; small intestine
- G06V 2201/03 — Recognition of patterns in medical or anatomical images
Abstract
An endoscope image recognition method comprises: converting an original image into a first color space and extracting the image of the original image in a first color channel of that space; converting the original image into a second color space and extracting the image of the original image in a second color channel of that space; weighting and normalizing the two channel images to generate a normalized probability map in which the target region is enhanced; obtaining the probability distribution similarity between neighboring regions of the image based on the normalized probability map; obtaining a target region extraction threshold based on that similarity; and identifying the target region based on the extraction threshold. The method processes images efficiently and can effectively detect small-area bleeding symptoms. It is suitable for extracting specific image regions that differ markedly from the background region, requires little computation and few steps, and can be applied in a variety of environments.
Description
Technical Field
The invention relates to an endoscope image processing method, and in particular to a method for identifying target regions in capsule endoscope images.
Background
With the continuing development of computer image processing technology, image processing and analysis are no longer limited to observation by the human eye. In the medical field, computer-assisted screening has become very important. For example, in intestinal examination and diagnosis, capsule endoscopy is often used to detect whether a bleeding region exists in the intestinal tract. The capsule endoscope is swallowed by the subject and passes through the esophagus and stomach to reach the intestinal tract, where it captures and transmits images of the intestinal interior. A single intestinal examination with a capsule endoscope can last 8 hours; at a sampling rate of 2 images per second, this yields about 8 × 3600 × 2 = 57,600 images per examination. If doctors screen all these images manually with the naked eye, the workload of reading a complete examination is heavy, the efficiency is low, and the diagnostic error rate is high.
Faced with such massive image-diagnosis demands, threshold segmentation based on a color feature space is often used to detect a specific target region in an image, for example a bleeding region. In actual detection, the bleeding areas present in an image have varying shapes and sizes: a patient's intestinal tract may exhibit a large bleeding area, or only a very small one. Traditional methods for judging small-area bleeding are often disturbed by the image background region, leading to inaccurate judgments.
The invention aims to provide an improved endoscope image recognition method that accurately and effectively detects intestinal bleeding regions from endoscope images and assists medical staff in diagnosis.
Disclosure of Invention
The invention provides an endoscope image recognition method comprising the following steps: converting an original image into a first color space and extracting the image of the original image in a first color channel of the first color space; converting the original image into a second color space and extracting the image of the original image in a second color channel of the second color space; weighting and normalizing the two channel images to generate a normalized probability map in which the target region is enhanced; obtaining the probability distribution similarity between neighboring regions of the image based on the normalized probability map; obtaining a target region extraction threshold based on that similarity; and identifying the target region based on the extraction threshold.
Compared with the prior art, the method offers higher image-processing efficiency and can effectively detect bleeding symptoms confined to small areas. It is suitable for extracting specific image regions that differ markedly from the background region, requires little computation and few steps, and can be applied in a variety of environments.
Brief description of the drawings
Fig. 1 is a flowchart of an endoscopic image recognition method according to an embodiment of the present invention.
Detailed Description
It will be appreciated that in addition to the example embodiments described, the components of the embodiments, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the example embodiments, as represented in the figures, is not intended to limit the scope of the embodiments, as claimed, but is merely representative of the example embodiments.
Reference in the specification to "one embodiment," "another embodiment," or "an embodiment" (or the like) means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" or the like in various places throughout this specification are not necessarily all referring to the same embodiment.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of the embodiments. One skilled in the relevant art will recognize that the various embodiments may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. That is, in some instances, known structures, materials, or operations may not be shown or described in detail to avoid obscuring the description.
The invention is described below with reference to the drawings. Fig. 1 is a schematic diagram of an endoscopic image recognition method according to an embodiment of the present invention, such as an intestinal bleeding area detection method based on an endoscopic image. As shown in fig. 1, an endoscopic image recognition method 100 according to the present embodiment includes the steps of:
step 110: an original image is acquired. Wherein step 110 comprises using an image acquisition instrument to perform image acquisition on the target to be inspected. For example, in the medical field for intestinal tract detection, step 110 includes image acquisition of the intestinal tract of a subject using an image acquisition instrument such as a capsule endoscope.
Step 122: the acquired raw image is converted into a first color space, e.g. an LAB color space. Specifically, step 122 includes inputting the original image in the image processing system, and outputting the image in the LAB color space after image processing.
Step 124: an image of a first color channel of the LAB color space, e.g., an a-channel image of the LAB color space, is extracted. Specifically, step 124 includes inputting an image of the LAB color space in the image processing system, and outputting an image of the a-channel of the LAB color space from the image processing system after image processing.
Step 132: the captured raw image is converted to a second color space, such as the CMYK color space. Specifically, step 132 includes inputting an original image in the image processing system, and outputting an image of CMYK color space from the image processing system after image processing.
Step 134: a second color channel image of the CMYK color space, for example, an image of the M channel, is extracted. Specifically, step 134 includes inputting images of CMYK color spaces in an image processing system, and outputting images of M channels of the CMYK color spaces from the image processing system after image processing.
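As a rough illustration of steps 132–134, the M channel of a naive RGB→CMYK conversion can be computed directly with NumPy. The patent does not specify the conversion formula, so the common approximation (K = 1 − max(R, G, B), M = (1 − G − K)/(1 − K)) is assumed here, and the function name is illustrative; the LAB conversion of steps 122–124 would typically be delegated to a library such as OpenCV or scikit-image.

```python
import numpy as np

def cmyk_m_channel(rgb):
    """M (magenta) channel of a naive RGB->CMYK conversion.

    rgb: float array in [0, 1] of shape (H, W, 3). Hypothetical helper:
    the patent does not fix the conversion, so the common formula
    K = 1 - max(R, G, B), M = (1 - G - K) / (1 - K) is assumed.
    """
    g = rgb[..., 1]
    k = 1.0 - np.max(rgb, axis=-1)
    # Avoid division by zero on pure-black pixels (K == 1).
    denom = np.where(k < 1.0, 1.0 - k, 1.0)
    m = (1.0 - g - k) / denom
    return np.clip(m, 0.0, 1.0)
```

Since bleeding regions are reddish, their magenta response is high, which is presumably why the M channel is chosen alongside LAB-A.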
Step 140: weight and normalize the A-channel image of the LAB color space and the M-channel image of the CMYK color space to generate a normalized probability map. For example, let A(x) be the value at pixel x of the LAB A channel (LAB-A) and M(x) the value at pixel x of the CMYK M channel (CMYK-M). The two images are first normalized separately and then combined as a weighted sum:

F(x) = w·A(x) + (1 − w)·M(x)    (1)

where F(x) is the value at pixel x in the normalized probability map and w is a predetermined weight, 0 < w < 1.
In this embodiment, the A-channel image of the LAB color space and the M-channel image of the CMYK color space are weighted and normalized with weights of 0.5 and 0.5, generating the normalized probability map. One skilled in the art will appreciate that other weights may be used, for example 0.4/0.6 or 0.3/0.7, and so on.
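A minimal NumPy sketch of step 140, assuming min–max normalization of each channel image before the weighted sum of formula (1) — the patent says "normalized" without fixing the method, so this is one plausible reading, and the function name is illustrative:

```python
import numpy as np

def normalized_probability_map(a_channel, m_channel, w=0.5):
    """Step 140 sketch: min-max normalize each channel image to [0, 1],
    then combine them as F(x) = w*A(x) + (1 - w)*M(x)."""
    def minmax(c):
        c = c.astype(np.float64)
        rng = c.max() - c.min()
        return (c - c.min()) / rng if rng > 0 else np.zeros_like(c)
    return w * minmax(a_channel) + (1.0 - w) * minmax(m_channel)
```

With w = 0.5 the two channels contribute equally, matching the preferred embodiment; the alternative weightings mentioned above are just different values of `w`.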
Step 150: obtain the probability distribution similarity between neighboring regions of the image based on the normalized probability map.
Step 160: obtain an optimal extraction threshold for the target region, for example for a bleeding region shown in the image, based on the inter-neighbor-region probability distribution similarity.
Step 170: identify the target region based on the optimal extraction threshold; for example, determine the bleeding region based on the bleeding-region extraction threshold. Specifically, step 170 comprises judging the regions of the normalized probability map whose values exceed the optimal extraction threshold to be bleeding regions.
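Step 170 reduces to a per-pixel comparison against the selected threshold; a one-line sketch (function name ours):

```python
import numpy as np

def extract_target_region(prob_map, threshold):
    """Step 170: mark as target (e.g. bleeding) region every pixel of the
    normalized probability map whose value exceeds the extraction threshold."""
    return prob_map > threshold
```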
According to a preferred embodiment, step 150 obtains the inter-neighbor-region probability distribution similarity from the normalized probability map by performing region-growing threshold segmentation on the map with a selected plurality of threshold parameters.
According to a preferred embodiment, the region-growing threshold segmentation of step 150 further comprises computing the inter-neighbor-region probability distribution similarity S_n from the quantities defined as follows.
Here p_n and p_{n−1} denote the nth and (n−1)th thresholds. The set of thresholds is P = {p_1, p_2, …, p_{n−1}, p_n, …, p_N}; every threshold lies in the range 0 to 1, and the thresholds decrease monotonically, i.e. p_1 > p_2 > … > p_{n−1} > p_n > … > p_N. H(p_n, i) denotes the value of the ith bin of the probability histogram of the image region obtained with p_n as the segmentation threshold, H(p_n, i) = |X_{n,i}| / |R_n|, where |·| denotes the total number of elements in a set and R_n is the set of pixels obtained with p_n as the segmentation threshold, R_n = {x | Pr(x) > p_n}; x is a pixel of the image and Pr(x) is the value at pixel x in the normalized probability map obtained in step 140. The set X_{n,i} in H(p_n, i) is defined as X_{n,i} = {x | x ∈ R_n, i·s > Pr(x) > (i − 1)·s}, where s is the bin width used when computing the probability histogram, s = 1/M.
In the preferred embodiment, N is set to 5, and p_1 and p_N are set to Prm − 0.1 and Prm − 0.3 respectively, where Prm denotes the maximum value over all pixel positions of the current normalized probability map; the thresholds between p_1 and p_N are spaced at equal intervals. M is set to 20 in this preferred embodiment.
As those skilled in the art will appreciate, the parameter values given in the preferred embodiment (e.g. the settings for N, p_1, p_N and M) are merely exemplary. They can be adjusted appropriately according to actual needs.
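The histogram definitions above can be sketched as follows. Note that the patent's actual similarity formula is published only as an image and is not reproduced in the text, so the histogram-intersection measure in `neighbor_similarity` below is just one plausible instantiation; bin edges also follow NumPy's half-open convention rather than the strict inequalities of X_{n,i}, and both function names are ours.

```python
import numpy as np

def region_histogram(prob_map, threshold, n_bins=20):
    """H(p_n, .): probability histogram of R_n = {x | Pr(x) > p_n},
    with bin width s = 1/M (M = n_bins), per the definitions above."""
    region = prob_map[prob_map > threshold]
    if region.size == 0:
        return np.zeros(n_bins)
    hist, _ = np.histogram(region, bins=n_bins, range=(0.0, 1.0))
    return hist / region.size  # normalize counts to probabilities

def neighbor_similarity(prob_map, p_prev, p_cur, n_bins=20):
    """S_n between the regions grown with consecutive thresholds.
    Histogram intersection is ASSUMED here; the patent's exact formula
    is not given in the text."""
    h_prev = region_histogram(prob_map, p_prev, n_bins)
    h_cur = region_histogram(prob_map, p_cur, n_bins)
    return float(np.minimum(h_prev, h_cur).sum())
```

Under this assumption S_n lies in [0, 1], with 1 meaning the two regions have identical probability distributions, which is consistent with how S_n is compared against t_0 and t_1 in step 160.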
According to a preferred embodiment, the step 160 of obtaining the optimal extraction threshold of the target region based on the inter-neighbor-region probability distribution similarity comprises the following steps:
- if the similarity S_2 of the first pair of neighboring regions is less than a preset threshold t_0, p_2 is selected as the optimal extraction threshold;
- if S_2 is greater than or equal to t_0, the similarities of the subsequent neighboring regions are checked in turn; if S_k > S_{k−1}, or the absolute value of their difference is greater than a first threshold t_1, then p_k is selected as the optimal extraction threshold;
- if none of the above conditions is met, the threshold p_N is selected as the optimal extraction threshold. In this embodiment, t_0 and t_1 are set to 0.4 and 0.07 respectively.
Those skilled in the art will appreciate that the parameter values set in the preferred embodiment (e.g. the settings for t_0 and t_1) are merely exemplary. They can be adjusted appropriately according to actual needs.
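The three-branch selection rule above can be transcribed directly. Indices follow the description — `sims[0]` holds S_2, the similarity between the regions grown with p_1 and p_2 — and the function name is our own:

```python
def select_threshold(thresholds, sims, t0=0.4, t1=0.07):
    """Optimal-threshold selection of step 160 (illustrative transcription).

    thresholds: [p_1, ..., p_N], monotonically decreasing.
    sims: [S_2, ..., S_N], where S_k compares the regions grown with
    p_{k-1} and p_k.
    """
    if sims[0] < t0:                      # S_2 < t_0  ->  p_2
        return thresholds[1]
    for k in range(1, len(sims)):         # check S_3 ... S_N in turn
        if sims[k] > sims[k - 1] or abs(sims[k] - sims[k - 1]) > t1:
            return thresholds[k + 1]      # p_k for this S_k
    return thresholds[-1]                 # fallback: p_N
```

Intuitively, a rising or sharply changing similarity signals that the grown region has started absorbing background, so the threshold just before that point is kept.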
As referred to herein, the singular forms "a", "an" and "the" may be construed to include the plural forms "one or more" unless expressly stated otherwise.
The present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limiting. Many modifications and variations will be apparent to practitioners skilled in this art. The example embodiments have been chosen and described in order to explain the principles and practical application, and to enable others of ordinary skill in the art to understand the various embodiments of the disclosure with such modifications as are suited to the particular use contemplated.
Thus, although the illustrative example embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the description is not limiting, and that various changes, modifications, substitutions, or alterations may be effected therein by one of ordinary skill in the pertinent art without departing from the scope of the present disclosure and provided claims.
Claims (4)
1. An endoscopic image recognition method, the method comprising:
converting the original image to a first color space;
extracting an image of the original image in a first color channel of the first color space;
converting the original image to a second color space;
extracting an image of the original image in a second color channel of the second color space;
weighting and normalizing the image of the first color channel and the image of the second color channel to generate a normalized probability map in which the target region is enhanced;
obtaining the probability distribution similarity between neighboring regions of the image based on the normalized probability map;
obtaining an optimal extraction threshold for the target region based on the inter-neighbor-region probability distribution similarity;
identifying the target region based on the optimal extraction threshold.
2. The method of claim 1, wherein obtaining the inter-neighbor-region probability distribution similarity based on the normalized probability map comprises: performing region-growing threshold segmentation on the normalized probability map using a plurality of selected threshold parameters to obtain the inter-neighbor-region probability distribution similarity.
3. The method of claim 2, wherein performing region-growing threshold segmentation on the normalized probability map using the selected plurality of threshold parameters to obtain the inter-neighbor-region probability distribution similarity comprises obtaining the similarity S_n from the following quantities:
S_n is the inter-neighbor-region probability distribution similarity; p_n and p_{n−1} are the nth and (n−1)th thresholds; the set of thresholds is P = {p_1, p_2, …, p_{n−1}, p_n, …, p_N}, each threshold lies in the range 0 to 1, and the thresholds decrease monotonically, i.e. p_1 > p_2 > … > p_{n−1} > p_n > … > p_N. H(p_n, i) denotes the value of the ith bin of the probability histogram of the image region obtained with p_n as the segmentation threshold:
H(p_n, i) = |X_{n,i}| / |R_n|
where |·| denotes the total number of elements in a set and R_n is the set of pixels obtained with p_n as the segmentation threshold:
R_n = {x | Pr(x) > p_n}
where x is a pixel of the image and Pr(x) denotes the value at pixel x in the normalized probability map; the set X_{n,i} in H(p_n, i) is defined as:
X_{n,i} = {x | x ∈ R_n, i·s > Pr(x) > (i − 1)·s}
where s is the bin width used when computing the probability histogram, s = 1/M.
4. The method of claim 3, wherein obtaining the target region extraction threshold based on the inter-neighbor-region probability distribution similarity comprises: if S_2 is less than a threshold t_0, selecting p_2 as the optimal extraction threshold; if S_2 is greater than or equal to t_0, continuing to check the similarities of subsequent neighboring regions, and if S_k > S_{k−1} or the absolute value of their difference is greater than t_1, selecting p_k as the optimal extraction threshold; and if none of the above conditions is met, selecting p_N as the optimal extraction threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910768365.1A CN112488979A (en) | 2019-08-20 | 2019-08-20 | Endoscope image recognition method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112488979A (en) | 2021-03-12 |
Family
ID=74919805
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910768365.1A Pending CN112488979A (en) | 2019-08-20 | 2019-08-20 | Endoscope image recognition method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112488979A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113255644A (en) * | 2021-05-10 | 2021-08-13 | 青岛海信移动通信技术股份有限公司 | Display device and image recognition method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3848891B1 (en) | Computer vision-based method and system for monitoring speed of intestinal lens removal in real time | |
RU2765619C1 (en) | Computer classification of biological tissue | |
US7907775B2 (en) | Image processing apparatus, image processing method and image processing program | |
JP5281826B2 (en) | Image processing apparatus, image processing program, and image processing method | |
WO2015141302A1 (en) | Image processing device, image processing method, and image processing program | |
Riegler et al. | Eir—efficient computer aided diagnosis framework for gastrointestinal endoscopies | |
CN109614869B (en) | Pathological image classification method based on multi-scale compression reward and punishment network | |
KR102103280B1 (en) | Assistance diagnosis method for large intestine disease based on deep learning | |
Ghosh et al. | A statistical feature based novel method to detect bleeding in wireless capsule endoscopy images | |
KR102338018B1 (en) | Ultrasound diagnosis apparatus for liver steatosis using the key points of ultrasound image and remote medical-diagnosis method using the same | |
JP4832794B2 (en) | Image processing apparatus and image processing program | |
Al Mamun et al. | Discretion way for bleeding detection in wireless capsule endoscopy images | |
Ghosh et al. | Block based histogram feature extraction method for bleeding detection in wireless capsule endoscopy | |
CN111862090A (en) | Method and system for esophageal cancer preoperative management based on artificial intelligence | |
US20150206064A1 (en) | Method for supervised machine learning | |
KR102095730B1 (en) | Method for detecting lesion of large intestine disease based on deep learning | |
Kumar et al. | Brain tumour image segmentation using MATLAB | |
Al Mamun et al. | Ulcer detection in image converted from video footage of wireless capsule endoscopy | |
CN112488979A (en) | Endoscope image recognition method | |
JP4855709B2 (en) | Image processing apparatus, image processing method, and image processing program | |
Vieira et al. | Automatic detection of small bowel tumors in endoscopic capsule images by ROI selection based on discarded lightness information | |
Wilhelm et al. | A deep learning approach to video fluoroscopic swallowing exam classification | |
US11164315B2 (en) | Image processing method and corresponding system | |
Iakovidis et al. | Unsupervised summarisation of capsule endoscopy video | |
Hossain et al. | Easy scheme for ulcer detection in wireless capsule endoscopy images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||