CN109241963B - Adaboost machine learning-based intelligent identification method for bleeding point in capsule gastroscope image


Info

Publication number
CN109241963B
CN109241963B
Authority
CN
China
Prior art keywords
image
pixel
label
area
bleeding
Prior art date
Legal status
Active
Application number
CN201810884973.4A
Other languages
Chinese (zh)
Other versions
CN109241963A (en)
Inventor
丁勇
刘毅
胡拓
罗述杰
冯彪
陈宏达
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN201810884973.4A
Publication of CN109241963A
Application granted
Publication of CN109241963B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/255: Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G06V 10/40: Extraction of image or video features
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03: Recognition of patterns in medical or anatomical images
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/243: Classification techniques relating to the number of classes
    • G06F 18/2431: Multiple classes

Abstract

The invention discloses a method for intelligent identification of bleeding points in capsule gastroscope images based on the Adaboost machine learning algorithm. The method comprises the following steps: first, the images in the input capsule gastroscope bleeding image set are converted to the HSI color space, the mean values of the three HSI channels of each image are extracted to construct three-dimensional vectors that form an image-level feature vector matrix, and a label matrix is established from the category of each image; these are used for Adaboost training to obtain an image classifier. Second, the color-normal image set and the color-darker image set are each preprocessed by threshold segmentation to filter out the invalid region and the over-dark and over-bright regions of the original images. Then, five-channel color data (H, S, I, A, M) of the pixels remaining after threshold segmentation are extracted to construct five-dimensional feature vectors, which are used for Adaboost training to obtain a pixel classifier. Finally, a post-processing and optimized-display step makes the final recognition result easier to observe and diagnose from.

Description

Adaboost machine learning-based intelligent identification method for bleeding point in capsule gastroscope image
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to an intelligent identification method for bleeding points in capsule gastroscope images based on Adaboost machine learning.
Background
Diseases of the digestive tract are difficult to discover in their early stage and difficult to cure radically in their later stage; once a patient misses the optimal window for early diagnosis and timely treatment, the disease may trouble them for a long time. Improving the detection of digestive tract diseases is therefore far more significant for reducing their burden on people than treating the diseases only after they have developed.
However, the detection means commonly used in hospitals are mostly direct examination with a traditional insertion gastroscope or rectoscope, and a serious drawback of such means is the severe discomfort they cause the patient. The wireless capsule endoscope was developed to reduce the pain of the traditional gastroscope. A wireless capsule endoscope generally transmits images of the inner wall of the gastrointestinal tract at a rate of at least 2 frames per second and stays in the human body for about 8 hours, so a single examination of one patient produces tens of thousands of images. Among such a huge number of images, the doctor cares only about the images, or certain areas within them, that show pathological features such as bleeding and ulcers, and the proportion of useful images is small; manual screening and identification is laborious and tedious. Computer-aided diagnosis with intelligent recognition of bleeding points is therefore of far-reaching significance.
Disclosure of Invention
The invention aims to use the Adaboost machine learning algorithm to locate and mark bleeding points, and to filter out non-bleeding areas, in capsule gastroscope images containing bleeding points, so that doctors can carry out diagnosis quickly and conveniently.
The technical scheme adopted by the invention is as follows:
First, the input image is converted to the HSI (hue, saturation, intensity) color space, and the mean values of its three HSI channels are extracted to construct a three-dimensional image-level feature vector, which is used for Adaboost training to obtain an image classifier. Second, because a capsule gastroscope image contains a metal frame as well as over-bright and over-dark areas, the pixels in those areas are filtered out by threshold segmentation; five-channel color data (H, S, I, A, M) of the remaining pixels are then extracted to construct five-dimensional feature vectors, which are used for Adaboost training to obtain a pixel classifier. Finally, the invention post-processes the image preliminarily identified by the pixel classifier, filtering out misjudged pixels to optimize the recognition result.
The technical scheme adopted by the invention for solving the technical problems is as follows:
step (1): input capsule gastroscope hemorrhage image set DtThen according to the overall depth of the image color to DtClassifying into normal set DtAAnd set of color parts DtB
Step (2): input D in sequencetAnd performing color space conversion on the image input each time, and converting the image from an RGB space to an HSI space. In HSI color space, D is calculated respectivelytThe mean value of three channels in each image is used for constructing a feature vector, such as an image level feature vector of the ith image:
Fimg(i)={mean(Hi),mean(Si),mean(Ii)} (1)
and (3): d obtained in the step (2)tThe image-level feature vectors of each image are integrated into a matrix, namely an N x 3 feature vector matrix T is obtainedimg(where N is a training set DtThe number of images in (c).
And (4): establishing a corresponding Nx 1 label matrix T according to the corresponding category of each image obtained by classification in the step (1)img_labelFor example, if the ith image is of normal color, Timg_label(i) If the color is darker than 1, T isimg_label(i)=-1。
And (5): and (3) establishing an effective data window by setting the horizontal and vertical coordinates of the effective area of the image, and cutting the image of each image in the image set input in the step (1) to remove the ineffective area of the image corner. The specific window coordinate is set according to the parameters of the gastroscope camera and the use environment.
And (6): filtering the over-bright and over-dark areas in each image obtained in the step (5) by setting a threshold value according to the I channel value of each pixel,
T1≤I≤T2 (2)
wherein, T1And T2Respectively a low threshold and a high threshold. Pixels that do not meet the above range are filtered out, and these pixels are directly determined as non-bleeding areas and do not participate in the subsequent training.
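As a continuation of the same illustrative Python sketch, steps (5) and (6) reduce to building a boolean mask of valid pixels. The window bounds are placeholders (the patent sets them per camera and environment), and T_1/T_2 are left as parameters; the embodiment below uses 0.235 and 0.863.

```python
import numpy as np

def preprocessing_mask(I, window, T1, T2):
    """Boolean mask of pixels kept after cropping and intensity thresholding.

    I      : intensity channel, shape HxW, values in [0, 1]
    window : (y0, y1, x0, x1) valid-data window; camera-specific, assumed here
    T1, T2 : low/high intensity thresholds of equation (2)
    """
    y0, y1, x0, x1 = window
    mask = np.zeros_like(I, dtype=bool)
    mask[y0:y1, x0:x1] = True        # step (5): remove the invalid corner areas
    mask &= (I >= T1) & (I <= T2)    # step (6): drop over-dark / over-bright pixels
    return mask
```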
And (7): for D obtained in the step (1)tAAnd extracting the pixel feature vector. First, for DtAPerforming the preprocessing of the steps (5) and (6) on each image; then, respectively in HSI, LAB and CMYK color spacesH, S, I, A, M color data of five channels are extracted for each pixel of the pre-processed image, and a pixel-level feature vector of the pixel is formed.
Fpixel_tA(i)={Hi,Si,Ii,Ai,Mi} (3)
where the H, S, I channel data can be calculated from RGB:
θ = arccos{ [(R - G) + (R - B)] / [2((R - G)^2 + (R - B)(G - B))^(1/2)] }
H = θ, B ≤ G;  H = 360° - θ, B > G
S = 1 - 3·min(R, G, B)/(R + G + B)
I = (R + G + B)/3 (4)
The A channel is calculated as
A = 500(f(X/0.950456) - f(Y)) (5)
where
X = 0.412453R + 0.357580G + 0.180423B
Y = 0.212671R + 0.715160G + 0.072169B (6)
f(t) = t^(1/3), t > 0.008856;  f(t) = 7.787t + 16/116, otherwise
The M channel data can be calculated from RGB:
M = (1 - G - K)/(1 - K) (7)
where
K = 1 - MAX(R, G, B) (8)
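Continuing the sketch, the five-channel pixel features of step (7) follow directly from equations (4) to (8); rgb_to_hsi is the helper from the earlier sketch, and f_lab and pixel_features are illustrative names.

```python
import numpy as np

def f_lab(t):
    """The piecewise f(t) used by the A channel (see the definitions under eq. (5))."""
    return np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 16.0 / 116.0)

def pixel_features(img, mask):
    """Stack {H, S, I, A, M} for every pixel selected by mask into an M x 5 matrix."""
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    H, S, I = rgb_to_hsi(img)
    X = 0.412453 * R + 0.357580 * G + 0.180423 * B
    Y = 0.212671 * R + 0.715160 * G + 0.072169 * B
    A = 500.0 * (f_lab(X / 0.950456) - f_lab(Y))   # LAB A channel, eq. (5)
    K = 1.0 - np.maximum(np.maximum(R, G), B)      # eq. (8)
    M = (1.0 - G - K) / (1.0 - K + 1e-8)           # CMYK magenta channel, eq. (7)
    return np.stack([H[mask], S[mask], I[mask], A[mask], M[mask]], axis=1)
```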
and (8): analogously to step (7), for D obtained in step (1)tBAnd extracting the pixel characteristic vector to obtain a pixel level characteristic vector.
Fpixel_tB(i)={Hi,Si,Ii,Ai,Mi} (9)
And (9): determining the category of each pixel in the steps (7) and (8) according to the mask image corresponding to each image in the data set, and establishing 2 label matrixes Tpixel_label_AAnd Tpixel_label_B
Step (10): utilizing the eigenvector matrix T obtained in the step (3)imgAnd the label matrix T obtained in the step (4)img_labelAdaboost image classifier training is carried out, and the score with optimal performance is obtained through parameter adjustmentAnd (4) a classifier. In the training process, parameters needing manual adjustment are the number K of times of loop training of the Adaboost algorithm, a regularization item v and the maximum splitting point number S of the CART decision tree. The S determines the performance strength of the weak classifiers obtained by each round of training, K, v determines the performance of the finally integrated strong classifiers, the larger the S is, the K can be properly reduced, but the too large S can cause the overfitting of the strong classifiers, and the smaller the v is, the weaker the overfitting phenomenon of the classifiers is, but the larger the K is needed. Therefore, the three parameters need to be set cooperatively, and the ideal parameter setting can be obtained by traversing the combination mode of the three parameters within a proper range according to a certain principle.
Step (11): utilizing the pixel-level feature vector matrix F obtained in the steps (7) and (8)pixel_tAAnd Fpixel_tBAnd the label matrix T obtained in the step (9)pixel_label_AAnd Tpixel_label_BAnd (5) carrying out Adaboost pixel classifier training, and obtaining a classifier with optimal performance through parameter adjustment. The training method of the Adaboost pixel classifier is the same as the step (10).
Step (12): classifying the image by using the image classifier trained in the step (10), identifying bleeding pixels in the gastroscope image by using the pixel classifier obtained in the step (11), and removing the bleeding pixels with smaller area of the communication area.
Step (13): and (4) displaying an image in the circular area according to the centroid coordinate and the area of the bleeding communication area, the centroid coordinate as the circle center and the area of the communication area as the circle area based on the detection result obtained in the step (12) so as to facilitate the diagnosis of a doctor.
The invention has the beneficial effects that:
the bleeding point identification model established based on the Adaboost machine learning algorithm can accurately identify most bleeding points in capsule gastroscope images and filter non-bleeding areas, and can also identify key bleeding points for a few images with poor imaging quality, and the model has higher application effect and significance in actual diagnosis.
Drawings
Fig. 1 is a structural block diagram for realizing intelligent capsule gastroscope image bleeding point identification based on Adaboost machine learning.
Fig. 2 is a diagram of bleeding point images and their position calibration in the capsule gastroscope bleeding point test set.
Fig. 3 is a schematic diagram of image display effect of bleeding point identification.
Detailed Description
The method of the present invention will be further described with reference to the accompanying drawings.
As shown in Fig. 1, the intelligent capsule gastroscope image bleeding point identification method based on Adaboost machine learning specifically includes the following steps:
step (1): 1893 capsule gastroscope image sets D with bleeding points obtained by acquisition are inputtAnd corresponding mask image, and then D according to the overall shade of the image colortPerforming manual classification, and classifying into normal set DtAAnd set of color parts DtB
Step (2): sequentially inputting D in Matlab environmenttThe color space conversion is carried out on the image input each time, the image is converted from the RGB space to the HSI space, and then D is calculated in the HSI color spacetThe mean value of three channels in each image is used for constructing a feature vector, such as an image level feature vector of the ith image:
Fimg(i)={mean(Hi),mean(Si),mean(Ii)} (1)
and (3): d obtained in the step (2)tThe image-level feature vectors of each image are integrated into a matrix, namely an N x 3 feature vector matrix T is obtainedimg(where N is a training set DtThe number of images in (c).
And (4): establishing a corresponding Nx 1 label matrix T according to the category corresponding to each image obtained by manual classification in the step (1)img_labelFor example, if the ith image is of normal color, Timg_label(i) If the color is darker than 1, T isimg_label(i)=-1。
And (5): and (3) establishing an effective data window by setting the horizontal and vertical coordinates of the effective area of the image, and cutting the image of each image in the image set input in the step (1) to remove the ineffective area of the image corner. The specific window coordinate is set according to the parameters of the gastroscope camera and the use environment. In this embodiment, the pixel coordinate range set for the cutting of the image invalid region is:
Figure BDA0001755414760000051
wherein x and y are pixel coordinate values.
And (6): filtering the over-bright and over-dark areas in each image obtained in the step (5) by setting a threshold value according to the I channel value of each pixel,
T1≤I≤T2 (3)
wherein, T1And T2Respectively, a low threshold and a high threshold, in this embodiment, T1=0.235,T20.863. Pixels that do not meet the above range are filtered out, and these pixels are directly determined as non-bleeding areas and do not participate in the subsequent training.
And (7): for D obtained in the step (1)tAAnd extracting the pixel feature vector. First, for DtAPerforming the preprocessing of the steps (5) and (6) on each image; then, H, S, I, A, M color data of five channels in total are extracted from each pixel of the preprocessed image in HSI, LAB, and CMYK color spaces, respectively, to construct a pixel-level feature vector of the pixel.
Fpixel_tA(i)={Hi,Si,Ii,Ai,Mi} (4)
where the H, S, I channel data can be calculated from RGB:
θ = arccos{ [(R - G) + (R - B)] / [2((R - G)^2 + (R - B)(G - B))^(1/2)] }
H = θ, B ≤ G;  H = 360° - θ, B > G
S = 1 - 3·min(R, G, B)/(R + G + B)
I = (R + G + B)/3 (5)
The A channel is calculated as
A = 500(f(X/0.950456) - f(Y)) (6)
where
X = 0.412453R + 0.357580G + 0.180423B
Y = 0.212671R + 0.715160G + 0.072169B
f(t) = t^(1/3), t > 0.008856;  f(t) = 7.787t + 16/116, otherwise (7)
The M channel data can be calculated from RGB:
M = (1 - G - K)/(1 - K) (8)
where
K = 1 - MAX(R, G, B) (9)
and (8): analogously to step (7), for D obtained in step (1)tBAnd extracting the pixel characteristic vector to obtain a pixel level characteristic vector.
Fpixel_tB(i)={Hi,Si,Ii,Ai,Mi} (10)
And (9): determining the category of each pixel in the steps (7) and (8) according to the mask image corresponding to each image in the data set, and establishing 2 label matrixes Tpixel_label_AAnd Tpixel_label_B
Step (10): calling a Matlab2015b self-contained machine learning tool kit classificationLearner, and utilizing the feature vector matrix T obtained in the step (3)imgAnd the label matrix T obtained in the step (4)img_labelAnd (5) carrying out Adaboost image classifier training. In this embodiment, the Maximum Number of split nodes (Maximum Number of splits) is set to control the performance strength of the weak learners obtained from each round of Adaboost training, the Number of weak learners (Number of learners) is set to control the efficiency of Adaboost training and the performance of the finally obtained strong classifiers, and the step size (Learning rate) is set to control the over-fitting resistance of the strong classifiers. After repeated training and parameter adjustment for many times, the parameter of the image classifier is set to be 3 in the maximum splitting node number, 15 in the weak learner number and 0.1 in the step length.
Step (11): utilizing the pixel-level feature vector matrix F obtained in the steps (7) and (8)pixel_tAAnd Fpixel_tBAnd the label matrix T obtained in the step (9)pixel_label_AAnd Tpixel_label_BAnd (5) carrying out Adaboost pixel classifier training, and obtaining a classifier with optimal performance through parameter adjustment. The training method of the Adaboost pixel classifier is the same as the step (10). In the present embodiment, for the normal set D for colorstAThe parameters of the pixel classifier are set to be 6 at the maximum splitting node number, 15 at the weak classifier number and 0.1 at the step length; for set of color partial depths DtBThe parameters of the pixel classifier of (1) are set to be that the maximum split node number is 10, the weak classifier number is 100, and the step length is 0.1.
Step (12): classifying the image by using the image classifier trained in the step (10), identifying bleeding pixels in the gastroscope image by using the pixel classifier obtained in the step (11), and removing the bleeding pixels with smaller area of the communication area. In this embodiment, the specific method is as follows: and (4) sequentially identifying the pixels in the gastroscope image by using the pixel classifier obtained in the step (11), obtaining the label of each identified pixel, and obtaining a binary image according to the label and the pixel coordinate of each pixel, wherein if the label shows white for the pixels with bleeding and black for the pixels without bleeding. Further, the number, the area size and the centroid of the white connected regions are obtained under the Matlab platform. By setting a reasonable threshold value for the area, the white area with the too small area is corrected to be black (i.e. the bleeding label is corrected to be a non-bleeding label), so that correction is realized.
Step (13): and (4) displaying an image in the circular area according to the centroid coordinate and the area of the bleeding communication area, the centroid coordinate as the circle center and the area of the communication area as the circle area based on the detection result obtained in the step (12) so as to facilitate the diagnosis of a doctor. In this embodiment, the white area in the binary image obtained in step (12) is the identified bleeding point area. For the area, the centroid of the area is taken as the center of a circle, and the corresponding actual image in the circular area with the area 2 times is displayed, so that a good display effect on the bleeding point can be obtained.
To evaluate the performance of the proposed method, this example tests it on a capsule gastroscope bleeding point test set jointly constructed by this unit and a partner company. The test set contains 1863 bleeding point images, each of which corresponds to a bleeding point position calibration map, as shown in Fig. 2.
In the test, the trained model processes the input test images, and the image-level recognition performance of the model is evaluated by how well the automatically recognized images identify bleeding points and filter non-bleeding areas. The test results of this example are shown in Fig. 3: the bleeding points identified and displayed with optimization match the actual bleeding points, so the method can assist doctors in making a correct diagnosis.
Further, in order to evaluate the performance of the bleeding point detection algorithm provided by the present invention, the following analysis indexes are adopted in the test of this embodiment:
(1) Accuracy:
Accuracy = (TP + TN)/(TP + TN + FP + FN) (11)
where TP (true positive) is the number of actual lesions diagnosed as lesions, FP (false positive) is the number of actually normal tissues misjudged as lesions, TN (true negative) is the number of actually normal tissues diagnosed as normal, and FN (false negative) is the number of actual lesions misjudged as normal tissue.
(2) Specificity:
Specificity = TN/(TN + FP) (12)
Specificity mainly reflects the ability to identify non-lesions; the higher the value, the better.
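Both indexes are straightforward to compute from the four counts; a direct transcription of equations (11) and (12):

```python
def accuracy(TP, FP, TN, FN):
    return (TP + TN) / (TP + TN + FP + FN)

def specificity(TN, FP):
    return TN / (TN + FP)
```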
Table 1 lists the performance of each index for bleeding point detection. The proposed method achieves high bleeding point detection accuracy and specificity, and can thus greatly help improve doctors' diagnostic efficiency.
TABLE 1. Test results on the test image set
[Table 1 appears as an image in the original document; it reports the accuracy and specificity values measured on the test image set.]

Claims (5)

1. An intelligent identification method for bleeding points in capsule gastroscope images based on Adaboost machine learning is characterized by comprising the following steps:
step (1): input the capsule gastroscope bleeding image set D_t and classify D_t according to the overall depth of its image colors into a color-normal set D_tA and a color-darker set D_tB;
Step (2): input D in sequencetThe color space conversion is carried out on the image input each time, the image is converted from the RGB space to the HSI space, and D is respectively calculated in the HSI color spacetConstructing a feature vector by the mean value of three channels of each image, wherein the image-level feature vector of the ith image is as follows:
Fimg(i)={mean(Hi),mean(Si),mean(Ii)} (1)
and (3): d obtained in the step (2)tThe image-level feature vectors of each image are integrated into a matrix, namely an N x 3 feature vector matrix T is obtainedimg(ii) a Wherein N is a training set DtThe number of images in;
step (4): establish the corresponding N x 1 label matrix T_img_label according to the category assigned to each image in step (1); if the i-th image is color-normal, then T_img_label(i) = 1, and if its color is darker, T_img_label(i) = -1;
And (5): setting the horizontal and vertical coordinates of the effective area of the image, constructing an effective data window, and carrying out image cutting on each image in the image set input in the step (1) to remove the ineffective area of the image corner;
step (6): set thresholds on the I channel value of each pixel to filter out the over-bright and over-dark areas in each image obtained in step (5),
T_1 ≤ I ≤ T_2 (2)
where T_1 and T_2 are the low and high thresholds, respectively; pixels outside this range are filtered out, directly judged as non-bleeding areas, and excluded from subsequent training;
step (7): extract pixel feature vectors for the set D_tA obtained in step (1), by first applying the preprocessing of steps (5) and (6) to every image in D_tA and then extracting, for each pixel of the preprocessed images, the color data of five channels in total (H, S, I from the HSI space, A from the LAB space, and M from the CMYK space) to form the pixel-level feature vector of each pixel:
F_pixel_tA(i) = {H_i, S_i, I_i, A_i, M_i} (3)
step (8): extract pixel feature vectors for the set D_tB obtained in step (1), yielding the pixel-level feature vectors:
F_pixel_tB(i) = {H_i, S_i, I_i, A_i, M_i} (9)
step (9): determine the category of each pixel from steps (7) and (8) according to the mask image corresponding to each image in the data set, and establish the two label matrices T_pixel_label_A and T_pixel_label_B;
Step (10): utilizing the eigenvector matrix T obtained in the step (3)imgAnd the label matrix T obtained in the step (4)img_labelCarrying out Adaboost image classifier training to obtain an image classifier with optimal performance;
step (11): train an Adaboost pixel classifier using the pixel-level feature vector matrices F_pixel_tA and F_pixel_tB obtained in steps (7) and (8) and the label matrices T_pixel_label_A and T_pixel_label_B obtained in step (9), obtaining the pixel classifier with the best performance;
step (12): classify images with the image classifier trained in step (10), identify the bleeding pixels in the gastroscope images with the pixel classifier obtained in step (11), and remove bleeding pixels whose connected regions are too small in area;
step (13): based on the detection result obtained in step (12), display the image within a circular area whose center is the centroid of each bleeding connected region and whose size is determined by the region's area, to facilitate the doctor's diagnosis.
2. The method for intelligently identifying bleeding points in capsule gastroscope images based on Adaboost machine learning according to claim 1, characterized in that, in steps (7) and (8),
the H, S, I channel data are calculated from RGB:
θ = arccos{ [(R - G) + (R - B)] / [2((R - G)^2 + (R - B)(G - B))^(1/2)] }
H = θ, B ≤ G;  H = 360° - θ, B > G
S = 1 - 3·min(R, G, B)/(R + G + B)
I = (R + G + B)/3 (4)
the A channel is calculated as
A = 500(f(X/0.950456) - f(Y)) (5)
where
X = 0.412453R + 0.357580G + 0.180423B
Y = 0.212671R + 0.715160G + 0.072169B (6)
f(t) = t^(1/3), t > 0.008856;  f(t) = 7.787t + 16/116, otherwise
and the M channel data are calculated from RGB:
M = (1 - G - K)/(1 - K) (7)
where
K = 1 - MAX(R, G, B) (8).
3. The method for intelligently identifying bleeding points in capsule gastroscope images based on Adaboost machine learning according to claim 1, characterized in that, in steps (10) and (11), the parameters adjusted during training are the number of Adaboost training rounds K, the regularization term v, and the maximum number of split nodes S of the CART decision tree, wherein S determines the strength of the weak classifier obtained in each training round, and K and v determine the performance of the finally integrated strong classifier.
4. The method for intelligently identifying bleeding points in capsule gastroscope images based on Adaboost machine learning according to claim 1, characterized in that the step (12) is specifically:
identify the pixels of the gastroscope image in sequence with the pixel classifier obtained in step (11), obtain the label of each identified pixel, and obtain a binary image from each pixel's label and coordinates, in which pixels labeled as bleeding are displayed white and pixels labeled as non-bleeding are displayed black; compute the number, area, and centroid of the white connected regions, set a threshold on the area, and correct white regions whose area is too small to black, namely correct their bleeding labels to non-bleeding labels, realizing the correction.
5. The method for intelligently identifying bleeding points in capsule gastroscope images based on Adaboost machine learning according to claim 4, characterized in that the step (13) is specifically:
the white regions in the binary image obtained in step (12) are the identified bleeding point regions; for each such region, construct a circular region of twice its area with the centroid as the circle center, and display the corresponding part of the actual image within that circular region.
CN201810884973.4A 2018-08-06 2018-08-06 Adaboost machine learning-based intelligent identification method for bleeding point in capsule gastroscope image Active CN109241963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810884973.4A CN109241963B (en) 2018-08-06 2018-08-06 Adaboost machine learning-based intelligent identification method for bleeding point in capsule gastroscope image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810884973.4A CN109241963B (en) 2018-08-06 2018-08-06 Adaboost machine learning-based intelligent identification method for bleeding point in capsule gastroscope image

Publications (2)

Publication Number Publication Date
CN109241963A CN109241963A (en) 2019-01-18
CN109241963B true CN109241963B (en) 2021-02-02

Family

ID=65069912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810884973.4A Active CN109241963B (en) 2018-08-06 2018-08-06 Adaboost machine learning-based intelligent identification method for bleeding point in capsule gastroscope image

Country Status (1)

Country Link
CN (1) CN109241963B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110400308B (en) * 2019-07-30 2022-01-28 青岛海信医疗设备股份有限公司 Image display method, device, equipment and storage medium
CN111986196B (en) * 2020-09-08 2022-07-12 贵州工程应用技术学院 Automatic monitoring method and system for retention of gastrointestinal capsule endoscope
CN112819017B (en) * 2021-03-09 2022-08-16 遵义师范学院 High-precision color cast image identification method based on histogram
CN115184369B (en) * 2022-09-14 2023-01-03 北京奥乘智能技术有限公司 Capsule detection device, capsule detection method, image processing apparatus, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090080768A1 (en) * 2007-09-20 2009-03-26 Chung Shan Institute Of Science And Technology, Armaments Bureau, M.N.D. Recognition method for images by probing alimentary canals
JP5555097B2 (en) * 2010-08-24 2014-07-23 オリンパス株式会社 Image processing apparatus, method of operating image processing apparatus, and image processing program
US9743824B2 (en) * 2013-09-04 2017-08-29 Siemens Medical Solutions Usa, Inc. Accurate and efficient polyp detection in wireless capsule endoscopy images
CN106373137B (en) * 2016-08-24 2019-01-04 安翰光电技术(武汉)有限公司 Hemorrhage of digestive tract image detecting method for capsule endoscope
CN107292347A (en) * 2017-07-06 2017-10-24 中冶华天南京电气工程技术有限公司 A kind of capsule endoscope image-recognizing method

Also Published As

Publication number Publication date
CN109241963A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN109241963B (en) Adaboost machine learning-based intelligent identification method for bleeding point in capsule gastroscope image
CN109859203B (en) Defect tooth image identification method based on deep learning
CN109472781B (en) Diabetic retinopathy detection system based on serial structure segmentation
CN106023151B (en) Tongue object detection method under a kind of open environment
CN107730489A (en) Wireless capsule endoscope small intestine disease variant computer assisted detection system and detection method
TWI696145B (en) Colonoscopy image computer-aided recognition system and method
CN107767365A (en) A kind of endoscopic images processing method and system
CN110189303B (en) NBI image processing method based on deep learning and image enhancement and application thereof
CN107049263A (en) Leucoderma condition-inference and cosmetic effect evaluating method and system based on image procossing
CN110826576B (en) Cervical lesion prediction system based on multi-mode feature level fusion
CN105913075A (en) Endoscopic image focus identification method based on pulse coupling nerve network
CN102567734B (en) Specific value based retina thin blood vessel segmentation method
CN111223110B (en) Microscopic image enhancement method and device and computer equipment
Bourbakis Detecting abnormal patterns in WCE images
CN109635871A (en) A kind of capsule endoscope image classification method based on multi-feature fusion
CN105657580A (en) Capsule endoscopy video summary generation method
CN110495888B (en) Standard color card based on tongue and face images of traditional Chinese medicine and application thereof
CN109242792B (en) White balance correction method based on white object
CN111062936B (en) Quantitative index evaluation method for facial deformation diagnosis and treatment effect
Chen et al. Ulcer detection in wireless capsule endoscopy video
CN114332910A (en) Human body part segmentation method for similar feature calculation of far infrared image
CN109636864A (en) A kind of tongue dividing method and system based on color correction Yu depth convolutional neural networks
CN112464871A (en) Deep learning-based traditional Chinese medicine tongue image processing method and system
CN109711306B (en) Method and equipment for obtaining facial features based on deep convolutional neural network
CN116109818A (en) Traditional Chinese medicine pulse condition distinguishing system, method and device based on facial video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant