CN116434920A - Gastrointestinal epithelial metaplasia progression risk prediction method and device - Google Patents
- Publication number
- CN116434920A CN116434920A CN202310639573.8A CN202310639573A CN116434920A CN 116434920 A CN116434920 A CN 116434920A CN 202310639573 A CN202310639573 A CN 202310639573A CN 116434920 A CN116434920 A CN 116434920A
- Authority
- CN
- China
- Prior art keywords
- image
- marker
- attribute
- segmentation
- obtaining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30028—Colon; Small intestine
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a gastrointestinal epithelial metaplasia progression risk prediction method and device, relating to the technical field of image processing. The method comprises the following steps: performing gastroscopic image intestinal metaplasia grade classification by acquiring characteristic quantification values of the tag attributes of a marker segmentation image, an atrophic gastritis segmentation image and an intestinal metaplasia segmentation image. The first tag attribute of the marker segmentation image comprises a blood vessel attribute, a fold attribute, a color attribute and a diffuse attribute; the second tag attribute of the atrophic gastritis segmentation image comprises a roughness attribute, a villus-like attribute, an off-white attribute and a bright blue ridge attribute; the third tag attribute of the intestinal metaplasia segmentation image comprises a position attribute and a morphology attribute. The invention fully considers the influence of the characteristic quantification values of multiple different attributes of the gastroscope image on the accuracy and intuitiveness of image processing, and effectively improves the efficiency and accuracy of intestinal metaplasia risk level identification.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a gastrointestinal epithelial metaplasia progression risk prediction method and device.
Background
Intestinal metaplasia is the major precancerous condition of intestinal-type gastric cancer. A multicenter investigation found that the incidence of gastrointestinal epithelial metaplasia in China is about 23.6%, so the absolute number of affected patients is huge. At present, intestinal metaplasia patients are reexamined regularly in clinical practice according to the guidelines. However, only about 0.2% of intestinal metaplasia patients eventually progress to gastric cancer; the remaining patients stay stable or even improve, yet there is no efficient way to distinguish intestinal metaplasia with a high risk of progression from that with a low risk. On the one hand, doctors and patients pay little attention to intestinal epithelial metaplasia, and the regular reexamination rate is below 50%, so the condition of some high-progression-risk patients is detected late; on the other hand, some low-progression-risk patients undergo long-term, repeated reexamination, which wastes medical resources.
Disclosure of Invention
In view of the above problems, a first aspect of the invention provides a gastrointestinal epithelial metaplasia progression risk prediction method, which can effectively improve the efficiency and accuracy of intestinal metaplasia risk level identification.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a method of predicting risk of progression of gastrointestinal epithelial metaplasia, comprising the steps of:
performing gastroscopic image intestinal metaplasia grade classification by acquiring characteristic quantification values of the tag attributes of a marker segmentation image, an atrophic gastritis segmentation image and an intestinal metaplasia segmentation image;
the first tag attribute of the marker segmentation image comprises a blood vessel attribute, a fold attribute, a color attribute and a diffuse attribute;
the second tag attribute of the atrophic gastritis segmentation image comprises a roughness attribute, a villus-like attribute, an off-white attribute and a bright blue ridge attribute;
the third tag attribute of the intestinal metaplasia segmentation image comprises a position attribute and a morphology attribute.
In some embodiments, performing gastroscopic image intestinal metaplasia grade classification by acquiring the characteristic quantification values of the tag attributes of the marker segmentation image, the atrophic gastritis segmentation image and the intestinal metaplasia segmentation image comprises the following steps:
obtaining a gastroscope image, and carrying out marker segmentation on the gastroscope image to obtain a marker segmentation image;
extracting features of the first tag attributes from the marker segmentation image, obtaining first feature quantization values corresponding to the first tag attributes, and inputting the first feature quantization values into a trained machine learning classifier for classification to obtain a gastroscopic image atrophic gastritis classification result;
extracting features of the second tag attributes from the atrophic gastritis segmentation image, obtaining second feature quantization values corresponding to the second tag attributes, and inputting the second feature quantization values into a trained machine learning classifier for classification to obtain a gastroscopic image intestinal metaplasia classification result;
extracting features of the third tag attributes from the intestinal metaplasia segmentation image, obtaining third feature quantization values corresponding to the third tag attributes, and inputting the third feature quantization values into a trained machine learning classifier for classification to obtain a gastroscopic image intestinal metaplasia risk level classification result.
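The three-stage cascade described above (marker segmentation → atrophic gastritis classification → intestinal metaplasia classification → risk level) can be sketched as a generic pipeline. Every callable below is a hypothetical placeholder for the trained segmentation models, feature extractors, and classifiers; this is not the patent's actual implementation:

```python
from typing import Callable, List, Sequence

def cascade_classify(image,
                     segmenters: Sequence[Callable],
                     extractors: Sequence[Callable],
                     classifiers: Sequence[Callable]) -> List:
    """Run the three-stage cascade: each stage segments its input, quantifies
    that stage's tag-attribute features, and classifies; the segmentation of
    one stage becomes the input of the next."""
    results = []
    current = image
    for segment, extract, classify in zip(segmenters, extractors, classifiers):
        seg = segment(current)              # stage-specific segmentation image
        features = extract(seg)             # feature quantification values
        results.append(classify(features))  # stage classification result
        current = seg                       # feed forward to the next stage
    return results                          # [AG result, IM result, IM risk level]
```

The design choice worth noting is that each classifier consumes quantified features of the previous stage's segmentation, so the three results are produced in one forward pass over the image.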
In some embodiments, feature extraction of a first tag attribute is performed on the marker-segmented image, and a first feature quantization value corresponding to the first tag attribute is obtained, including the following steps:
acquiring a blood vessel segmentation image in the marker segmentation image;
according to the formula:
obtaining a blood vessel characteristic quantification value label_1, wherein n_v is the number of blood vessels in the blood vessel segmentation image, s_v is the blood vessel area in the blood vessel segmentation image, and S is the marker segmentation image area;
according to the formula:
obtaining a fold characteristic quantization value label_2, wherein s_z is the fold area in the marker region of the marker segmentation image, n_z is the number of folds in the marker region of the marker segmentation image, S is the marker segmentation image area, s_f is the fold area in the non-marker region of the marker segmentation image, and n_f is the number of folds in the non-marker region of the marker segmentation image;
according to the formula:
obtaining a color characteristic quantization value label_3, wherein r_c, g_c, b_c are the three-channel average color values, S is the area of the marker region in the marker segmentation image, and n_c is the number of entries remaining in the list after colors close to black are removed, over which the average is computed;
according to the formula:
obtaining a diffuse characteristic quantization value label_4, wherein M is the number of images taken over the entire stomach wall, w_Qi, h_Qi are the width and height of each image, S_i is the marker area obtained from each image by marker segmentation and connected-component analysis, w_bi, h_bi are the width and height of the marker, and (x_bi, y_bi) are the centroid coordinates of the marker.
In some embodiments, the vessel segmentation image acquiring step includes:
converting the marker segmentation image into a gray level image and then binarizing the gray level image;
performing an erosion operation on the binarized marker segmentation image to obtain a marker segmentation image mask;
carrying out median filtering on the gray level image of the marker segmentation image;
performing histogram equalization on the obtained marker segmentation image after median filtering;
performing gamma conversion on the marker segmentation image subjected to histogram equalization;
carrying out convolution operation on the obtained marker segmentation image after gamma conversion;
comparing the convolved marker segmentation image with the marker segmentation image mask pixel by pixel: wherever the pixel value in the mask is 0, setting the corresponding pixel of the convolved marker segmentation image to 0, thereby obtaining a denoised marker segmentation image;
and carrying out contrast stretching on the denoised marker segmentation image to obtain a blood vessel segmentation image corresponding to the marker segmentation image.
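The blood vessel extraction steps above can be sketched end to end. The sketch below uses NumPy and SciPy stand-ins for the morphological and filtering operations; the threshold, erosion depth, gamma value, and sharpening kernel are all illustrative assumptions, not values specified by the patent:

```python
import numpy as np
from scipy import ndimage

def vessel_segment(gray: np.ndarray) -> np.ndarray:
    """gray: 2-D uint8 grayscale marker segmentation image (hypothetical input)."""
    # 1-2. binarize, then erode to build the marker-region mask
    binary = (gray > 50).astype(np.uint8)                  # threshold is illustrative
    mask = ndimage.binary_erosion(binary, iterations=2).astype(np.uint8)
    # 3. median filter to suppress speckle noise
    smooth = ndimage.median_filter(gray, size=3)
    # 4. histogram equalization via the classic CDF remap
    hist = np.bincount(smooth.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    eq = (cdf[smooth] * 255).astype(np.float64)
    # 5. gamma transform to adjust contrast of vessel structures
    gamma = 255.0 * (eq / 255.0) ** 1.5
    # 6. convolution with a sharpening kernel to emphasize edges
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
    conv = ndimage.convolve(gamma, kernel, mode="nearest")
    # 7. zero out pixels wherever the mask is 0 (pixelwise comparison)
    denoised = np.where(mask == 0, 0.0, conv)
    # 8. contrast stretching to the full 0-255 range
    lo, hi = denoised.min(), denoised.max()
    stretched = (denoised - lo) / max(hi - lo, 1e-9) * 255.0
    return stretched.astype(np.uint8)
```

An OpenCV implementation would replace these calls with `cv2.erode`, `cv2.medianBlur`, `cv2.equalizeHist`, and `cv2.filter2D`; the pipeline order is what matters here.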
In some embodiments, feature extraction of a second tag attribute is performed on the atrophic gastritis segmented image, and a second feature quantization value corresponding to the second tag attribute is obtained, including the following steps:
according to the formula:
obtaining a roughness characteristic quantization value label_5, wherein W_W, H_W are the width and height of the atrophic gastritis segmentation image, P_mean is the average pixel value of the atrophic gastritis segmentation image, Img_W is the atrophic gastritis segmentation image, and W_0 is the row of maximum variance obtained by binarizing the atrophic gastritis segmentation map at a set threshold, with 0 < W_0 < W_W;
According to the formula:
obtaining a villus-like characteristic quantization value label_6, wherein N_r is the number of villus-like regions, s_ri is the area of a villus-like region, w_ri, h_ri are the width and height of the minimum bounding rectangle of the villus-like region, and n_ri is the number of villus segments obtained by breaking the villus-like region at its corner points;
according to the formula:
obtaining an off-white characteristic quantization value label_7, wherein n_b is the number of pixels in the largest color class, and (r_bi, g_bi, b_bi) are the pixel values of the pixels in the color class with the largest number of members;
according to the formula:
obtaining a bright blue ridge characteristic quantization value label_8, wherein W_L, H_L are the width and height of the bright blue ridge segmentation image, Img_L is the bright blue ridge segmentation image, (x_RSL, y_RSL) are the centroid coordinates of the bright blue ridge segmentation image, S_RSL is the bright blue ridge segmentation image area, (x_RS, y_RS) are the centroid coordinates of the stained-and-magnified atrophic gastritis segmentation image, S_RS is the area of the stained-and-magnified atrophic gastritis region, a further term is the standard bright blue ridge three-channel average pixel value, and P_L is the average pixel value of the bright blue ridge segmentation image.
In some embodiments, extracting features of the third tag attributes from the intestinal metaplasia segmentation image and obtaining the third feature quantization values corresponding to the third tag attributes comprises the following steps:
according to the formula:
obtaining a position characteristic quantization value label_9, wherein S_C is the area of the intestinal metaplasia segmentation image, [(x_FDX, y_FDX), (x_FDD, y_FDD), (x_FJ, y_FJ), (x_FTX, y_FTX), (x_FTD, y_FTD)] are the centroid coordinates of the stomach risk-site segmentation images, [S_FDX, S_FDD, S_FJ, S_FTX, S_FTD] are the areas of the risk sites, and list_d = [d_FDX, d_FDD, d_FJ, d_FTX, d_FTD] are the distances between the intestinal metaplasia segmentation image and each risk site;
According to the formula:
obtaining a morphological characteristic quantization value label_10, wherein W_C, H_C are respectively the width and height of the minimum bounding rectangle of the intestinal metaplasia segmentation image, (x_C, y_C) are the centroid coordinates of the intestinal metaplasia segmentation image, r_C is the radius of the minimum circumscribed circle of the intestinal metaplasia segmentation image, n_f is the number of non-intestinal-metaplasia segmented regions within the minimum circumscribed circle, s_fi is the area of a non-intestinal-metaplasia segmented region, and (x_fi, y_fi) is the center of the minimum circumscribed circle of the intestinal metaplasia segmentation image.
In some embodiments, the third feature quantization values are input into a trained machine learning classifier for classification to obtain the gastroscopic image intestinal metaplasia risk level classification result, wherein the classifier comprises a feature fitting sub-network and a classification sub-network;
obtaining the gastroscopic image intestinal metaplasia risk level classification result comprises the following steps:
fitting the characteristic quantification values of the tag attributes with the feature fitting sub-network to obtain a judgment coefficient;
and, based on the judgment coefficient, analyzing with the classification sub-network to obtain the identification result.
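A minimal numeric sketch of this two-part design, assuming the feature fitting sub-network reduces the tag-attribute quantification values to one judgment coefficient by a learned linear fit, and the classification sub-network maps that coefficient to a risk level by ascending thresholds. The weights, bias, and thresholds below are made-up placeholders, not the patent's trained parameters:

```python
import numpy as np

def fit_judgment_coefficient(features, weights, bias):
    # feature fitting sub-network (sketched as a linear fit):
    # combine the quantified tag-attribute features into one coefficient
    return float(np.dot(weights, features) + bias)

def classify_risk_level(coef, thresholds):
    # classification sub-network (sketched as thresholding):
    # map the judgment coefficient to a discrete risk level
    for level, t in enumerate(thresholds):
        if coef < t:
            return level
    return len(thresholds)

coef = fit_judgment_coefficient([0.4, 0.7, 0.2], [1.0, 2.0, 0.5], 0.1)
level = classify_risk_level(coef, thresholds=[1.0, 2.0])
```

In the patent both sub-networks are learned components; the linear fit and fixed thresholds here only illustrate the coefficient-then-classify data flow.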
In view of the above problems, a second aspect of the invention provides a gastrointestinal epithelial metaplasia progression risk prediction device, which can effectively improve the efficiency and accuracy of intestinal metaplasia risk level identification.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a gastrointestinal epithelial metaplasia progression risk prediction device for:
performing gastroscopic image intestinal grade classification by acquiring characteristic quantification values of tag attributes of the marker segmentation image, the atrophic gastritis segmentation image and the intestinal segmentation image;
the first label attribute of the marker segmentation image comprises a blood vessel attribute, a fold attribute, a color attribute and a diffuse attribute;
the second label attribute of the atrophic gastritis segmented image comprises a roughness attribute, a villus attribute, a off-white attribute and a bright blue ridge attribute;
the third tag attribute of the enteric segmented image comprises a position attribute and a morphology attribute.
In some embodiments, the device comprises:
the acquisition module is used for acquiring gastroscope images;
the segmentation module is used for carrying out marker segmentation on the gastroscope image to obtain a marker segmentation image;
the feature extraction module is used for extracting features of the first tag attributes from the marker segmentation image and obtaining first feature quantized values corresponding to the first tag attributes;
the feature extraction module is further used for extracting features of the second tag attributes from the atrophic gastritis segmentation image and obtaining second feature quantization values corresponding to the second tag attributes, and for extracting features of the third tag attributes from the intestinal metaplasia segmentation image and obtaining third feature quantization values corresponding to the third tag attributes;
the generation module is used for inputting the first feature quantization values into a trained machine learning classifier for classification to obtain a gastroscopic image atrophic gastritis classification result, inputting the second feature quantization values into the trained machine learning classifier for classification to obtain a gastroscopic image intestinal metaplasia classification result, and inputting the third feature quantization values into a trained machine learning classifier for classification to obtain a gastroscopic image intestinal metaplasia risk level classification result.
In some embodiments, the feature extraction module is to:
feature extraction of the first tag attributes is performed on the marker segmentation image, and first feature quantization values corresponding to the first tag attributes are obtained, comprising the following steps:
acquiring a blood vessel segmentation image in the marker segmentation image;
according to the formula:
obtaining a blood vessel characteristic quantification value label_1, wherein n_v is the number of blood vessels in the blood vessel segmentation image, s_v is the blood vessel area in the blood vessel segmentation image, and S is the marker segmentation image area;
according to the formula:
obtaining a fold characteristic quantization value label_2, wherein s_z is the fold area in the marker region of the marker segmentation image, n_z is the number of folds in the marker region of the marker segmentation image, S is the marker segmentation image area, s_f is the fold area in the non-marker region of the marker segmentation image, and n_f is the number of folds in the non-marker region of the marker segmentation image;
according to the formula:
obtaining a color characteristic quantization value label_3, wherein r_c, g_c, b_c are the three-channel average color values, S is the area of the marker region in the marker segmentation image, and n_c is the number of entries remaining in the list after colors close to black are removed, over which the average is computed;
according to the formula:
obtaining a diffuse characteristic quantization value label_4, wherein M is the number of images taken over the entire stomach wall, w_Qi, h_Qi are the width and height of each image, S_i is the marker area obtained from each image by marker segmentation and connected-component analysis, w_bi, h_bi are the width and height of the marker, and (x_bi, y_bi) are the centroid coordinates of the marker.
The gastrointestinal epithelial metaplasia progression risk prediction method comprises: performing gastroscopic image intestinal metaplasia grade classification by acquiring characteristic quantification values of the tag attributes of a marker segmentation image, an atrophic gastritis segmentation image and an intestinal metaplasia segmentation image; the first tag attribute of the marker segmentation image comprises a blood vessel attribute, a fold attribute, a color attribute and a diffuse attribute; the second tag attribute of the atrophic gastritis segmentation image comprises a roughness attribute, a villus-like attribute, an off-white attribute and a bright blue ridge attribute; the third tag attribute of the intestinal metaplasia segmentation image comprises a position attribute and a morphology attribute. The invention fully considers the influence of the characteristic quantification values of multiple different attributes of the gastroscope image on the accuracy and intuitiveness of image processing, and effectively improves the efficiency and accuracy of intestinal metaplasia risk level identification.
Drawings
FIG. 1 is a flow chart of a method for predicting risk of progression of gastrointestinal epithelial metaplasia according to an embodiment of the invention;
FIG. 2 is a graph showing the effect of marker-segment images of a method for predicting risk of progression of gastrointestinal epithelial metaplasia in an embodiment of the invention;
FIG. 3 is a diagram showing non-atrophic gastritis/atrophic gastritis of a method for predicting risk of progression of gastrointestinal epithelial metaplasia in an embodiment of the present invention;
FIG. 4 is a diagram showing atrophic gastritis/intestinal metaplasia of a method for predicting risk of progression of gastrointestinal epithelial metaplasia in an embodiment of the present invention;
FIG. 5 is a graph showing the effect of blood vessel extraction in a method for predicting risk of progression of gastrointestinal epithelial metaplasia in accordance with an embodiment of the present invention;
FIG. 6 is a graph showing the effect of color principal component extraction in a method for predicting risk of progression of gastrointestinal epithelial metaplasia according to an embodiment of the present invention;
FIG. 7 is a graph showing the effect of villus-like segmentation in a method for predicting risk of progression of gastrointestinal epithelial metaplasia in an embodiment of the present invention;
fig. 8 is a diagram showing a bright blue ridge of a method for predicting risk of progression of gastrointestinal epithelial metaplasia in an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
Referring to fig. 1, a first aspect of the present invention provides a method for predicting the risk of progression of gastrointestinal epithelial metaplasia, comprising the steps of:
s1, obtaining a gastroscope image, and performing marker segmentation on the gastroscope image to obtain a marker segmentation image;
The gastroscope image can be a white-light image, a stained image or a stained magnified image, i.e., an electronic gastroscope image of the stomach captured by an electronic endoscope. Specifically, the gastroscope image may be acquired during gastroscopy, or retrieved from an image library stored in advance in the memory of the computer device.
And carrying out marker segmentation on the gastroscope image to obtain a marker segmentation image, wherein the marker segmentation image is shown in fig. 2.
Specifically, gastroscope images with annotated marker regions are used as sample images to train a marker segmentation model in advance; for example, an image segmentation network model such as UNet++ or Mask R-CNN may be selected. In a specific embodiment, the gastroscope image is used as the input of the trained segmentation model, and the output of the segmentation model is the marker segmentation image. It can be understood that in this embodiment, by obtaining the marker segmentation map, multiple tag-attribute feature quantifications are performed on the marker segmentation image, thereby improving quantification accuracy.
S2, extracting features of first tag attributes from the marker segmented image, obtaining first feature quantized values corresponding to the first tag attributes, inputting the first feature quantized values into a trained machine learning classifier for classification, and obtaining a classification result of the gastroscopic image atrophic gastritis;
and extracting the features of the first tag attributes from the marker-segmented image to obtain first feature quantized values corresponding to the first tag attributes, wherein the first tag attributes are a plurality of attributes of the marker-segmented image, such as blood vessel attributes, fold attributes, color attributes, diffuse attributes and the like of the marker-segmented image, and the first feature quantized values are quantized values corresponding to the features of the first tag attributes.
Specifically, a feature extraction method is applied to the target region image to obtain the first feature quantization values. The feature extraction method may combine hand-crafted feature extraction with algorithms based on image feature analysis, such as pixel-neighborhood mean calculation and maximum-pixel-value extraction, or it may be a deep learning feature extractor such as a convolutional neural network (CNN) or UNet++; the method can be selected according to the characteristics of each first tag attribute, and is not limited herein. In this embodiment, by extracting features of the marker segmentation image to obtain the corresponding first feature quantization values, quantified calculation of each first-tag-attribute feature of the marker segmentation image is achieved, so that the feature quantization values are more comprehensive and rich; accurate and intuitive image analysis and recognition can then be performed on the basis of the multiple first feature quantization values, improving the accuracy of atrophic gastritis identification.
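The two hand-crafted extraction methods named above, pixel-neighborhood mean and maximum pixel value, can be illustrated on a toy array (the 3x3 window size and edge replication are assumptions for the example):

```python
import numpy as np
from scipy import ndimage

img = np.array([[10., 20., 30.],
                [40., 50., 60.],
                [70., 80., 90.]])
# 3x3 pixel-neighborhood mean; mode="nearest" replicates edge pixels
neigh_mean = ndimage.uniform_filter(img, size=3, mode="nearest")
max_val = img.max()   # maximum pixel value over the region
```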
In some embodiments, each first feature quantized value is input into a trained machine learning classifier for classification, and an atrophic gastritis classification result is obtained. The trained machine learning classifier can be realized by a machine learning algorithm model that acquires classification capability through sample learning; the machine learning classifier of this embodiment maps different first feature value sets to one of two results, non-atrophic gastritis or atrophic gastritis. In particular, a classifier that classifies using at least one machine learning model may be utilized. The machine learning model may be one or more of the following: neural networks (e.g., convolutional neural networks, BP neural networks, etc.), logistic regression models, support vector machines, decision trees, random forests, perceptrons, and other machine learning models. As part of the training of such a machine learning model, the training input is the respective first feature quantized values, for example, the blood vessel attribute, fold attribute, color attribute, diffuse attribute, and the like; through training, a correspondence between first feature value sets and the atrophic gastritis status of the gastroscopic image to be identified is established, so that the preset classifier has the ability to judge whether the classification result corresponding to the gastroscopic image to be identified is non-atrophic gastritis or atrophic gastritis. In this embodiment, the classifier is a binary classifier with 2 classification results, that is, a non-atrophic gastritis result or an atrophic gastritis result, as shown in fig. 3.
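As a hedged sketch of this classification step, assuming scikit-learn and synthetic data in place of real annotated first feature quantized values (the 4-dimensional vectors, class means, and model parameters here are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic 4-dimensional feature vectors: [vessel, fold, color, diffuseness]
X_pos = rng.normal(loc=0.7, scale=0.1, size=(100, 4))  # labeled atrophic gastritis
X_neg = rng.normal(loc=0.3, scale=0.1, size=(100, 4))  # labeled non-atrophic
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 100 + [0] * 100)

# Random forest is one of the model families listed above.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict([[0.72, 0.68, 0.75, 0.70]])[0]  # feature set resembling the positive class
```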
It can be understood that this embodiment fully considers the feature quantized values of a plurality of different attributes of the marker-segmented image and their influence on the accuracy and intuitiveness of image processing. By extracting features with richer information and performing quantization and comprehensive processing on the features of a plurality of different attributes, the rationality of feature value quantization is improved; compared with traditional methods that consider only single feature information and a single statistical comparison method, the recognition efficiency of atrophic gastritis is greatly improved.
S3, extracting features of second tag attributes from the atrophic gastritis segmented image, obtaining second feature quantized values corresponding to the second tag attributes, inputting the second feature quantized values into a trained machine learning classifier for classification, and obtaining an intestinal metaplasia classification result for the gastroscopic image;
In some embodiments, feature extraction of the plurality of second tag attributes is performed on the atrophic gastritis segmented image, and second feature quantized values corresponding to the plurality of second tag attributes are obtained. The second tag attributes refer to a plurality of attributes of the atrophic gastritis segmented image, for example, its roughness attribute, villus-like attribute, off-white attribute, bright blue ridge attribute, and the like; each second feature quantized value is the quantized value corresponding to the feature of one second tag attribute.
Specifically, a feature extraction method is adopted to perform feature extraction on the target area image to obtain the second feature quantized values. The feature extraction method may combine artificial feature extraction with algorithms based on image feature analysis, such as pixel neighborhood mean calculation and maximum pixel value extraction, or may be a deep learning feature extraction method, such as a convolutional neural network (CNN) or UNet++; it can be selected according to the characteristics of each second tag attribute and is not limited herein. In this embodiment, by extracting features of the atrophic gastritis segmented image to obtain the corresponding second feature quantized values, quantization of the features of each second tag attribute is achieved, so that the feature quantized values are more comprehensive and rich; accurate and intuitive image analysis and recognition can then be performed based on the multiple second feature quantized values, improving the accuracy of intestinal metaplasia recognition.
In some embodiments, each second feature quantized value is input into a trained machine learning classifier for classification, and an intestinal metaplasia classification result is obtained. The trained machine learning classifier can be realized by a machine learning algorithm model that acquires classification capability through sample learning; the machine learning classifier of this embodiment maps different second feature value sets to one of two results, non-intestinal-metaplasia or intestinal metaplasia. In particular, a classifier that classifies using at least one machine learning model may be utilized. The machine learning model may be one or more of the following: neural networks (e.g., convolutional neural networks, BP neural networks, etc.), logistic regression models, support vector machines, decision trees, random forests, perceptrons, and other machine learning models. As part of the training of such a machine learning model, the training input is the respective second feature quantized values, for example, the roughness attribute, villus-like attribute, off-white attribute, bright blue ridge attribute, and the like; through training, a correspondence between second feature value sets and the intestinal metaplasia status of the gastroscopic image to be identified is established, so that the preset classifier has the ability to judge whether the classification result corresponding to the gastroscopic image to be identified is non-intestinal-metaplasia or intestinal metaplasia. In this embodiment, the classifier is a binary classifier with 2 classification results, that is, a non-intestinal-metaplasia result or an intestinal metaplasia result, as shown in fig. 4.
It can be understood that this embodiment fully considers the feature quantized values of a plurality of different attributes of the atrophic gastritis segmented image and their influence on the accuracy and intuitiveness of image processing. By extracting features with richer information and performing quantization and comprehensive processing on the features of a plurality of different attributes, the rationality of feature value quantization is improved; compared with traditional methods that consider only single feature information and a single statistical comparison method, the recognition efficiency of intestinal metaplasia is greatly improved.
S4, extracting features of third tag attributes from the intestinal metaplasia segmented image, obtaining third feature quantized values corresponding to the third tag attributes, inputting the third feature quantized values into a trained machine learning classifier for classification, and obtaining an intestinal metaplasia risk level classification result for the gastroscopic image.
In some embodiments, feature extraction of a plurality of third tag attributes is performed on the intestinal metaplasia segmented image, and third feature quantized values corresponding to the plurality of third tag attributes are obtained.
The third tag attributes refer to a plurality of attributes of the intestinal metaplasia segmented image, for example, its position attribute, morphology attribute, and the like; each third feature quantized value is the quantized value corresponding to the feature of one third tag attribute.
Specifically, a feature extraction method is adopted to perform feature extraction on the target area image to obtain the third feature quantized values. The feature extraction method may combine artificial feature extraction with algorithms based on image feature analysis, such as pixel neighborhood mean calculation and maximum pixel value extraction, or may be a deep learning feature extraction method, such as a convolutional neural network (CNN) or UNet++; it can be selected according to the characteristics of each third tag attribute and is not limited herein. In this embodiment, by extracting features of the intestinal metaplasia segmented image, corresponding third feature quantized values are obtained, so that the feature quantized values are more comprehensive and rich; accurate and intuitive image analysis and recognition can then be performed based on the multiple third feature quantized values, improving the accuracy of intestinal metaplasia risk level recognition.
Each third feature quantized value is input into a trained machine learning classifier for classification, obtaining an intestinal metaplasia risk level classification result.
The trained machine learning classifier can be realized by a machine learning algorithm model that acquires classification capability through sample learning; the machine learning classifier of this embodiment maps different third feature value sets to one of two results, low risk or high risk. In particular, a classifier that classifies using at least one machine learning model may be utilized. The machine learning model may be one or more of the following: neural networks (e.g., convolutional neural networks, BP neural networks, etc.), logistic regression models, support vector machines, decision trees, random forests, perceptrons, and other machine learning models. As part of the training of such a machine learning model, the training input is the respective third feature quantized values, for example, the position attribute, morphology attribute, and the like; through training, a correspondence between third feature value sets and the intestinal metaplasia risk level of the gastroscopic image to be identified is established, so that the preset classifier has the ability to judge whether the classification result corresponding to the gastroscopic image to be identified is low risk or high risk. In this embodiment, the classifier is a binary classifier with 2 classification results, that is, a low-risk or high-risk result.
It can be appreciated that this embodiment fully considers the feature quantized values of a plurality of different attributes of the intestinal metaplasia segmented image and their influence on the accuracy and intuitiveness of image processing. By extracting features with richer information and performing quantization and comprehensive processing on the features of a plurality of different attributes, the rationality of feature value quantization is improved; compared with traditional methods that consider only single feature information and a single statistical comparison method, the recognition efficiency of intestinal metaplasia risk levels is greatly improved.
The above embodiment provides a gastrointestinal epithelial metaplasia progression risk prediction method. Firstly, a gastroscopic image to be identified is obtained and subjected to marker segmentation to obtain a marker segmentation image. Secondly, a plurality of first feature quantized values corresponding to the marker segmentation image are obtained and input into a trained machine learning classifier to obtain an atrophic gastritis recognition result. Thirdly, a plurality of second feature quantized values of the atrophic gastritis segmented image are obtained and input into the trained machine learning classifier to obtain an intestinal metaplasia recognition result. Finally, a plurality of third feature quantized values of the intestinal metaplasia segmented image are obtained and input into the trained machine learning classifier to obtain an intestinal metaplasia risk level recognition result. This embodiment fully considers the influence of the feature quantized values of a plurality of different attributes of the gastroscopic image on the accuracy and intuitiveness of image processing, and effectively improves the efficiency and accuracy of intestinal metaplasia risk level recognition.
In one embodiment, the plurality of first tag attributes includes a blood vessel attribute, a fold attribute, a color attribute, a diffuse attribute; the step of extracting the features of the plurality of first tag attributes from the marker segmented image to obtain the first feature quantization value corresponding to each first tag attribute comprises the following steps: determining a blood vessel quantification result of the marker segmentation image by adopting a preset blood vessel quantification method; determining a fold quantization result of the marker segmentation image by adopting a preset fold quantization method; determining a color quantization result of the marker segmentation image by adopting a preset color quantization method; and determining a diffuse quantization result of the marker segmentation image by adopting a preset diffuse quantization method.
Wherein, microvessels can hardly be observed on a normal gastric mucosal surface under white light; however, when lesions such as inflammation appear on the gastric mucosal surface, the submucosal blood vessels become clearly visible.
Specifically, the step of acquiring the blood vessel in the marker-segmented image includes:
converting the marker segmentation image into a gray-scale image and then binarizing it, where the binarization threshold τ, τ ∈ (0, 255), can be determined according to actual conditions and is not specifically limited here;
calling the OpenCV function cv2.erode() to perform an erosion operation on the binarized marker segmentation image, obtaining a marker segmentation image mask;
calling the OpenCV function cv2.medianBlur() to apply median filtering to the gray-scale marker segmentation image;
calling the OpenCV function cv2.createCLAHE() to perform histogram equalization on the median-filtered marker segmentation image;
performing a gamma transformation on the histogram-equalized marker segmentation image, where the gamma transformation formula is O(x, y) = I(x, y)^γ, O(x, y) is the gamma-transformed image, I(x, y) is the original image, and γ = 0.5 in the invention;
calling the OpenCV function cv2.filter2D() to perform a convolution operation on the gamma-transformed marker segmentation image;
comparing the convolved marker segmentation image with the mask image pixel by pixel: if the pixel value at a position in the mask image is 0, the pixel value of the convolved marker segmentation image at that position is set to 0, obtaining a denoised marker segmentation image;
and carrying out contrast stretching on the denoised marker segmentation image to obtain a blood vessel segmentation image corresponding to the marker segmentation image, as shown in fig. 5.
Specifically, on the basis of connected domains, the blood vessel area s_v and the number of blood vessels n_v in the vessel segmentation image are obtained; the vessel quantification value label_1 is then computed from s_v, n_v, and S, where S is the area of the marker segmentation image, obtained during segmentation.
Wherein, normal gastric mucosa has folds, but when lesions such as inflammation appear on the gastric mucosal surface, the mucosal folds can flatten or even disappear.
Specifically, a trained fold segmentation model is adopted to segment the folds in the gastroscopic image, where the fold segmentation model can be an image segmentation network model such as UNet++ or Mask R-CNN. On the basis of connected domains, the fold area within the marker region is s_z and the number of folds there is n_z; the area of the marker region is S; the fold area within the non-marker region is s_f and the number of folds there is n_f. The fold quantization value label_2 is computed from these quantities.
Wherein, a reddish or whitish gastric mucosa, or a mottled red-and-white appearance, indicates that the gastric mucosa is abnormal and that the probability of atrophic gastritis is high.
Specifically, color principal component extraction is performed on the marker segmentation image with PIL's built-in getcolors() method, as shown in fig. 6. In one embodiment, this outputs a color-count correspondence list such as [(14757, (193,87,73)), (12541, (176,85,69)), (25072, (175,65,58)), (11435, (211,110,91)), (16153, (195,96,79)), (15781, (179,76,64)), (3488, (116,48,42)), (4308, (5,0,2)), (1920, (0,0,3)), (156689, (0,0,0))]. Colors close to black (color_r ≤ 20, color_g ≤ 20, color_b ≤ 20) are rejected; the count n_c of the remaining list and the three-channel average color r_c, g_c, b_c are then calculated. The color quantization value label_3 is computed from these quantities, where S is the marker region area.
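Using the example list above, the near-black rejection and averaging can be reproduced as follows; taking n_c as the total pixel count of the remaining colors is an interpretation, since the source formula itself is not reproduced here:

```python
# (count, (r, g, b)) pairs in the format returned by PIL's Image.getcolors()
color_counts = [
    (14757, (193, 87, 73)), (12541, (176, 85, 69)), (25072, (175, 65, 58)),
    (11435, (211, 110, 91)), (16153, (195, 96, 79)), (15781, (179, 76, 64)),
    (3488, (116, 48, 42)), (4308, (5, 0, 2)), (1920, (0, 0, 3)), (156689, (0, 0, 0)),
]

# Reject colors near black (r <= 20, g <= 20, b <= 20).
kept = [(n, c) for n, c in color_counts if not all(ch <= 20 for ch in c)]
n_c = sum(n for n, _ in kept)  # pixel count of the remaining list (interpretation)
# Count-weighted three-channel average color.
r_c, g_c, b_c = (sum(n * c[i] for n, c in kept) / n_c for i in range(3))
```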
Wherein, atrophic gastritis spreads across the gastric mucosal surface and, over time, may spread across the entire inner surface of the stomach.
Specifically, images of the entire inner wall of the stomach are acquired, obtaining M images, where the width and height of the i-th image are w_Qi and h_Qi respectively. Marker segmentation is performed on each image; on the basis of connected domains, the marker area obtained from each image is S_i, the width and height of the marker are w_bi and h_bi, and the centroid coordinates of the marker are (x_bi, y_bi). The diffuseness quantization value label_4 is computed from these quantities.
In one embodiment, the trained machine learning classifier includes a feature fitting sub-network and a classification sub-network; inputting each first characteristic quantized value into a trained machine learning classifier for classification, and obtaining a gastroscope image atrophic gastritis classification result, wherein the method comprises the following steps of: fitting each first characteristic quantized value by adopting a characteristic fitting sub-network to obtain a judgment coefficient; based on the judgment coefficient, a classification sub-network is adopted for analysis, and a classification result is obtained.
Specifically, the feature fitting sub-network performs fitting processing on each first feature quantized value and, according to the fitting result, assigns each first feature quantized value a corresponding weight. Continuing the example of the first feature quantized values label_1 ~ label_4 in the above embodiment, the weights λ_1, λ_2, λ_3, λ_4 corresponding to label_1 ~ label_4 are determined using, for example, decision trees or random forests; the fused feature value is then the weighted combination λ_1·label_1 + λ_2·label_2 + λ_3·label_3 + λ_4·label_4. The classification result is one of non-atrophic gastritis or atrophic gastritis.
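A minimal sketch of such a weighted fusion, with hypothetical quantized values, weights, and decision threshold (none of these numbers come from the patent):

```python
# Hypothetical first feature quantized values label_1..label_4 and weights lambda_1..lambda_4
quantized = [0.42, 0.31, 0.58, 0.17]
weights = [0.35, 0.25, 0.25, 0.15]  # e.g. derived from a random forest's feature importances

fused = sum(w * q for w, q in zip(weights, quantized))  # fused feature value
is_atrophic = fused >= 0.3                              # hypothetical decision threshold
```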
In this embodiment, by performing fusion calculation on each first feature quantization value, the information features of the gastroscope image are richer, quantization is more accurate, and identification accuracy and identification efficiency of the atrophic gastritis of the gastroscope image are improved.
In one embodiment, the plurality of second tag attributes includes a roughness attribute, a villus-like attribute, an off-white attribute, and a bright blue ridge attribute; the step of extracting the features of the plurality of second tag attributes from the atrophic gastritis segmented image to obtain the second feature quantized values corresponding to the second tag attributes comprises the following steps: determining a roughness quantization result of the atrophic gastritis segmented image by adopting a preset roughness quantization method; determining a villus-like quantization result of the atrophic gastritis segmented image by adopting a preset villus-like quantization method; determining an off-white quantization result of the atrophic gastritis segmented image by adopting a preset off-white quantization method; and determining a bright blue ridge quantization result of the atrophic gastritis segmented image by adopting a preset bright blue ridge quantization method.
Wherein, as atrophic gastritis progresses toward intestinal metaplasia, the gastric mucosal surface becomes progressively rougher.
Specifically, the surface roughness feature quantization value label_5 is computed from W_W and H_W, the width and height of the atrophic gastritis segmentation image; P_mean, the average pixel value of the atrophic gastritis segmentation image; img_W, the atrophic gastritis segmentation image itself; and W_0, the maximum-variance row obtained by binarizing the atrophic gastritis segmentation map at a set threshold, where 0 < W_0 < W_W.
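Finding the maximum-variance row of the binarized segmentation map might be sketched as follows; the threshold value is an assumption, as the patent leaves it to be set:

```python
import numpy as np

def max_variance_row(seg: np.ndarray, tau: int = 128) -> int:
    """Binarize the segmentation image at threshold tau, then return the index of
    the row with maximal variance (the W_0 of the text)."""
    binary = (seg >= tau).astype(np.float64)
    return int(np.argmax(binary.var(axis=1)))
```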
Wherein, intestinal-metaplastic mucosa produces a villus-like pattern, as shown in fig. 7.
Specifically, a trained villus-like segmentation model is adopted to segment the villus-like regions in the atrophic gastritis segmentation image, where the villus-like segmentation model can be an image segmentation network model such as UNet++ or Mask R-CNN. On the basis of connected domains, each villus-like region is extracted independently, obtaining the area s_ri of each villus-like region, the number of villus-like regions N_r, and the width w_ri and height h_ri of each villus-like region's minimum bounding rectangle. Corner detection is then applied to obtain the corners of each villus-like region, and the region is broken at these corners into several villus segments, the number of segments being n_ri. The villus-like feature quantization value label_6 is computed from these quantities.
Wherein, the gastric mucosa after intestinal transformation is white in color.
Specifically, the colors in the atrophic gastritis segmented image are clustered by a color clustering method; in a specific embodiment a k-means clustering method may be used, where k is not specifically limited. The pixel value (r_bi, g_bi, b_bi) and pixel count n_b of the largest cluster are obtained, and the off-white quantization value label_7 is computed from them.
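A tiny Lloyd's-iteration k-means, shown here only to illustrate extracting the largest color cluster; real implementations would typically use a library routine, and the naive first-k initialization is an assumption:

```python
import numpy as np

def kmeans_largest_cluster(pixels: np.ndarray, k: int = 2, iters: int = 20):
    """Cluster (N, 3) RGB pixels with k-means; return (mean color, size) of the
    largest cluster. Initialization uses the first k pixels (assumed)."""
    centers = pixels[:k].astype(np.float64)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)  # (N, k) distances
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = pixels[assign == j].mean(axis=0)
    sizes = np.bincount(assign, minlength=k)
    big = int(sizes.argmax())
    return centers[big], int(sizes[big])
```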
Wherein, the gastric mucosa after intestinal metaplasia may exhibit a bright blue ridge phenomenon under staining and magnification, as shown in fig. 8.
Specifically, atrophic gastritis segmentation is performed on the stained magnified gastroscopic image, obtaining a stained magnified atrophic gastritis segmentation image with area S_RS and centroid (x_RS, y_RS). A trained bright blue ridge segmentation model is then adopted to segment the bright blue ridges in the atrophic gastritis segmentation image, obtaining a bright blue ridge segmentation image with area S_RSL and centroid (x_RSL, y_RSL); the bright blue ridge segmentation model can be an image segmentation network model such as UNet++ or Mask R-CNN. The average pixel value of the bright blue ridge segmentation image img_L is obtained, where W_L and H_L are its width and height. The bright blue ridge quantization value label_8 is computed from these quantities together with W_W and H_W, the width and height of the minimum bounding rectangle of the atrophic gastritis segmentation image, and a standard bright blue ridge three-channel mean pixel value, which in one specific embodiment is a preset value.
In one embodiment, the trained machine learning classifier includes a feature fitting sub-network and a classification sub-network; inputting each second characteristic quantized value into a trained machine learning classifier for classification, and obtaining a gastroscope image intestinal classification result, wherein the step comprises the following steps: fitting each second characteristic quantized value by adopting a characteristic fitting sub-network to obtain a judgment coefficient; based on the judgment coefficient, a classification sub-network is adopted for analysis, and a classification result is obtained.
Specifically, the feature fitting sub-network performs fitting processing on each second feature quantized value and, according to the fitting result, assigns each second feature quantized value a corresponding weight. Continuing the example of the second feature quantized values label_5 ~ label_8 in the above embodiment, the weights λ_5, λ_6, λ_7, λ_8 corresponding to label_5 ~ label_8 are determined using, for example, decision trees or random forests; the fused feature value is then the weighted combination λ_5·label_5 + λ_6·label_6 + λ_7·label_7 + λ_8·label_8. The classification result is one of non-intestinal-metaplasia or intestinal metaplasia.
In this embodiment, by performing fusion calculation on each second feature quantized value, the information features of the gastroscopic image are richer and quantization is more accurate, improving the recognition accuracy and efficiency of intestinal metaplasia in gastroscopic images.
In one embodiment, the plurality of third tag attributes includes a position attribute and a morphology attribute; the step of extracting the features of the plurality of third tag attributes from the intestinal metaplasia segmented image to obtain the third feature quantized values corresponding to the third tag attributes comprises the following steps: determining a position quantization result of the intestinal metaplasia segmented image by adopting a preset position quantization method; and determining a morphology quantization result of the intestinal metaplasia segmented image by adopting a preset morphology quantization method.
Wherein, the risk factor is greater when intestinal metaplasia occurs at the lesser curvature of the gastric antrum, the greater curvature of the gastric antrum, the gastric angle, the lesser curvature of the stomach, or the greater curvature of the stomach.
Specifically, a trained stomach risk site segmentation model is adopted to segment the stomach risk sites in the gastroscopic image, obtaining the centroids of the risk site segmentation images [(x_FDX, y_FDX), (x_FDD, y_FDD), (x_FJ, y_FJ), (x_FTX, y_FTX), (x_FTD, y_FTD)] and the risk site areas [S_FDX, S_FDD, S_FJ, S_FTX, S_FTD]; the stomach risk site segmentation model can be an image segmentation network model such as UNet++ or Mask R-CNN. The distances between the intestinal metaplasia segmentation image and the risk sites are list_d = [d_FDX, d_FDD, d_FJ, d_FTX, d_FTD], where each distance is the Euclidean distance between the centroid of the intestinal metaplasia segmentation image and the centroid of the corresponding risk site segmentation image. The position quantization value label_9 is computed from these distances together with the intersection ratios between the intestinal metaplasia segmentation image and each risk site area.
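The Euclidean-distance part of list_d can be sketched directly with the standard library; the centroid coordinates below are hypothetical:

```python
import math

im_centroid = (120.0, 80.0)                        # hypothetical metaplasia centroid
risk_centroids = [(100.0, 80.0), (150.0, 120.0)]   # hypothetical risk-site centroids

# Euclidean distance from the metaplasia centroid to each risk-site centroid.
list_d = [math.dist(im_centroid, c) for c in risk_centroids]
```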
Wherein, the more complex the morphology of the intestinal metaplasia segmentation image, the greater the risk factor.
Specifically, the centroid (x_C, y_C) of the intestinal metaplasia segmentation image is obtained, along with its minimum circumscribed circle and that circle's radius r_C. The number n_f of non-metaplastic segmented regions within the minimum circumscribed circle is obtained, together with the area s_fi and centroid (x_fi, y_fi) of each such region. The morphology quantization value label_10 is computed from these quantities, where W_C and H_C are the width and height of the minimum circumscribed rectangle of the intestinal metaplasia segmentation image.
In one embodiment, the trained machine learning classifier includes a feature fitting sub-network and a classification sub-network; inputting each third characteristic quantization value into a trained machine learning classifier for classification, and obtaining a gastroscope image intestinal risk level classification result, wherein the step comprises the following steps: fitting each third characteristic quantized value by adopting a characteristic fitting sub-network to obtain a judgment coefficient; based on the judgment coefficient, a classification sub-network is adopted for analysis, and a classification result is obtained.
Specifically, the feature fitting sub-network performs fitting processing on each third feature quantized value and, according to the fitting result, assigns each third feature quantized value a corresponding weight. Continuing the example of the third feature quantized values label_9 ~ label_10 in the above embodiment, the weights λ_9, λ_10 corresponding to label_9 ~ label_10 are determined using, for example, decision trees or random forests; the fused feature value is then the weighted combination λ_9·label_9 + λ_10·label_10. The classification result is a low-risk or high-risk result.
The method for predicting the risk of gastrointestinal epithelial metaplasia progression provided by the above embodiment comprises: performing gastroscopic image intestinal metaplasia risk level classification by acquiring the feature quantized values of the tag attributes of the marker segmentation image, the atrophic gastritis segmentation image, and the intestinal metaplasia segmentation image; the first tag attributes of the marker segmentation image comprise a blood vessel attribute, a fold attribute, a color attribute, and a diffuse attribute; the second tag attributes of the atrophic gastritis segmentation image comprise a roughness attribute, a villus-like attribute, an off-white attribute, and a bright blue ridge attribute; the third tag attributes of the intestinal metaplasia segmentation image comprise a position attribute and a morphology attribute. The invention fully considers the influence of the feature quantized values of a plurality of different attributes of the gastroscopic image on the accuracy and intuitiveness of image processing, and effectively improves the efficiency and accuracy of intestinal metaplasia risk level recognition.
The second aspect of the present invention provides a gastrointestinal epithelial metaplasia progression risk prediction apparatus for:
performing gastroscopic image intestinal metaplasia risk level classification by acquiring the feature quantized values of the tag attributes of the marker segmentation image, the atrophic gastritis segmentation image, and the intestinal metaplasia segmentation image;
the first tag attributes of the marker segmentation image comprise a blood vessel attribute, a fold attribute, a color attribute, and a diffuse attribute;
the second tag attributes of the atrophic gastritis segmentation image comprise a roughness attribute, a villus-like attribute, an off-white attribute, and a bright blue ridge attribute;
the third tag attributes of the intestinal metaplasia segmentation image comprise a position attribute and a morphology attribute.
In some embodiments, the apparatus comprises:
the acquisition module is used for acquiring gastroscope images;
the segmentation module is used for carrying out marker segmentation on the gastroscope image to obtain a marker segmentation image;
the feature extraction module is used for extracting features of the first tag attributes from the marker segmentation image and obtaining first feature quantized values corresponding to the first tag attributes;
the feature extraction module is further used for extracting features of second tag attributes from the atrophic gastritis segmented image, obtaining second feature quantized values corresponding to the second tag attributes, and extracting features of third tag attributes from the intestinal metaplasia segmented image, obtaining third feature quantized values corresponding to the third tag attributes;
The generation module is used for inputting the first characteristic quantized value into a trained machine learning classifier to classify, obtaining a gastroscope image atrophic gastritis classification result, and inputting the second characteristic quantized value into the trained machine learning classifier to classify, obtaining a gastroscope image enteron classification result; and inputting the third characteristic quantization value into a trained machine learning classifier for classification to obtain a gastroscope image intestinal risk level classification result.
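The module wiring described above (acquisition, segmentation, feature extraction, generation) can be sketched as a simple pipeline. This is an illustrative sketch only: the patent names the modules but discloses no implementation, so every function name and stub value below is hypothetical.

```python
# Illustrative sketch only: the patent names the modules but discloses no
# implementation, so every function below is a hypothetical stub.

def acquire_image():
    """Acquisition module: stands in for capturing a gastroscope frame."""
    return "gastroscope_image"

def segment_markers(image):
    """Segmentation module: stands in for marker segmentation."""
    return {"marker_image": image}

def extract_features(segmented, attributes):
    """Feature extraction module: one quantized value per tag attribute."""
    return {attr: 0.0 for attr in attributes}

def classify(classifier, features):
    """Generation module: feed quantized values to a trained classifier."""
    return classifier(features)

def predict_risk(classifier):
    image = acquire_image()
    seg = segment_markers(image)
    first = extract_features(seg, ["vessel", "fold", "color", "diffuse"])
    second = extract_features(seg, ["roughness", "villus", "off_white", "bright_blue_ridge"])
    third = extract_features(seg, ["position", "morphology"])
    return {
        "atrophic_gastritis": classify(classifier, first),      # from first values
        "intestinal_metaplasia": classify(classifier, second),  # from second values
        "risk_level": classify(classifier, third),              # from third values
    }

result = predict_risk(lambda features: "low")
```

In a real system the lambda stand-in would be the trained machine learning classifier referred to in the text.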
In some embodiments, the feature extraction module is configured to:
perform feature extraction of the first tag attributes on the marker segmentation image and obtain the first characteristic quantization values corresponding to the first tag attributes, comprising the following steps:
acquiring a blood vessel segmentation image in the marker segmentation image;
according to the formula:
obtaining a blood vessel characteristic quantization value label_1, where n_v is the number of blood vessels in the blood vessel segmentation image, s_v is the blood vessel area in the blood vessel segmentation image, and S is the area of the marker segmentation image;
according to the formula:
obtaining a fold characteristic quantization value label_2, where s_z is the fold area in the marker region of the marker segmentation image, n_z is the number of folds in the marker region, S is the area of the marker segmentation image, s_f is the fold area in the non-marker region, and n_f is the number of folds in the non-marker region;
according to the formula:
obtaining a color characteristic quantization value label_3, where r_c, g_c, b_c are the three-channel average color values, S is the area of the marker region in the marker segmentation image, and the average is computed over the n_c list entries that remain after colors close to black are removed;
according to the formula:
obtaining a diffuse characteristic quantization value label_4, where M is the number of images taken of the entire stomach wall, w_Qi and h_Qi are the width and height of each image, S_i is the marker area obtained from each image by marker segmentation and connected-domain analysis, w_bi and h_bi are the width and height of the marker, and (x_bi, y_bi) are the centroid coordinates of the marker.
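The color-attribute statistic described above (a three-channel average over marker-region pixels after discarding those close to black) can be sketched as follows. The black threshold and the final combination into label_3 are not specified in the text reproduced here, so the threshold used below is an assumption.

```python
import numpy as np

def mean_color_excluding_black(pixels_rgb, black_thresh=30):
    """pixels_rgb: (N, 3) marker-region pixel values; black_thresh is an assumption."""
    pixels = np.asarray(pixels_rgb, dtype=float)
    keep = pixels.max(axis=1) > black_thresh       # discard pixels close to black
    kept = pixels[keep]
    n_c = len(kept)                                # n_c: entries remaining
    r_c, g_c, b_c = kept.mean(axis=0)              # three-channel averages
    return (r_c, g_c, b_c), n_c
```

For example, a list containing two near-black pixels and two colored pixels averages only the colored pair, with n_c = 2.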
The blood vessel segmentation image is obtained by the steps of:
converting the marker segmentation image into a gray level image and then binarizing the gray level image;
invoking the opencv toolkit function cv2.erode() to perform an erosion operation on the binarized marker segmentation image, obtaining a marker segmentation mask image;
invoking the opencv toolkit function cv2.medianBlur() to apply median filtering to the gray-level marker segmentation image;
invoking the opencv toolkit function cv2.createCLAHE() to perform histogram equalization on the median-filtered marker segmentation image;
performing gamma transformation on the histogram-equalized marker segmentation image;
invoking the opencv toolkit function cv2.filter2D() to perform a convolution operation on the gamma-transformed marker segmentation image;
comparing the convolved marker segmentation image with the marker segmentation mask image bit by bit: if the pixel value at a position in the mask image is 0, the pixel value of the convolved marker segmentation image at that position is set to 0, obtaining a denoised marker segmentation image;
and carrying out contrast stretching on the denoised marker segmentation image to obtain a blood vessel segmentation image corresponding to the marker segmentation image.
In some embodiments, the feature extraction module is further configured to: perform feature extraction of the second tag attributes on the atrophic gastritis segmentation image and obtain the second characteristic quantization values corresponding to the second tag attributes, comprising the following steps:
according to the formula:
obtaining a roughness characteristic quantization value label_5, where W_W and H_W are the width and height of the atrophic gastritis segmentation image, P_mean is the average pixel value of the atrophic gastritis segmentation image, Img_W is the atrophic gastritis segmentation image, and W_0 is the maximum-variance row obtained by binarizing the atrophic gastritis segmentation map with a set threshold, with 0 < W_0 < W_W;
according to the formula:
obtaining a villus-like characteristic quantization value label_6, where N_r is the number of villus-like regions, s_ri is the area of each villus-like region, w_ri and h_ri are the width and height of the minimum bounding rectangle of the villus-like region, and n_ri is the number of villus segments obtained by breaking the villus-like region at its corner points;
according to the formula:
obtaining an off-white characteristic quantization value label_7, where n_b is the number of pixels in the largest color class and (r_bi, g_bi, b_bi) are the pixel values of the pixels in that largest class;
according to the formula:
obtaining a bright blue ridge characteristic quantization value label_8, where W_L and H_L are the width and height of the bright blue ridge segmentation image, Img_L is the bright blue ridge segmentation image, (x_RSL, y_RSL) are its centroid coordinates, S_RSL is its area, (x_RS, y_RS) are the centroid coordinates of the dye-magnified atrophic gastritis segmentation image, S_RS is the area of the dye-magnified atrophic gastritis region, and P_L is the average pixel value of the bright blue ridge segmentation image, compared against the standard bright blue ridge three-channel average pixel value.
In some embodiments, the feature extraction module is further configured to: perform feature extraction of the third tag attributes on the intestinal metaplasia segmentation image and obtain the third characteristic quantization values corresponding to the third tag attributes, comprising the following steps:
according to the formula:
obtaining a position characteristic quantization value label_9, where S_C is the area of the intestinal metaplasia segmentation image, [(x_FDX, y_FDX), (x_FDD, y_FDD), (x_FJ, y_FJ), (x_FTX, y_FTX), (x_FTD, y_FTD)] are the centroid coordinates of the stomach risk site segmentation images, [S_FDX, S_FDD, S_FJ, S_FTX, S_FTD] are the areas of the risk sites, and list_d = [d_FDX, d_FDD, d_FJ, d_FTX, d_FTD] are the distances between the intestinal metaplasia segmentation image and each risk site;
according to the formula:
obtaining a morphology characteristic quantization value label_10, where W_C and H_C are the width and height of the minimum bounding rectangle of the intestinal metaplasia segmentation image, (x_C, y_C) are the centroid coordinates of the intestinal metaplasia segmentation image, r_C is the radius of its minimum circumscribed circle, n_f is the number of non-intestinal-metaplasia segmentation regions within the minimum circumscribed circle, s_fi is the area of each non-intestinal-metaplasia segmentation region, and (x_fi, y_fi) is the center of the minimum circumscribed circle of the intestinal metaplasia segmentation image.
In some embodiments, the generation module is configured to input the third characteristic quantization value into a trained machine learning classifier for classification to obtain a gastroscope image intestinal metaplasia risk grade classification result, where the classifier comprises a feature fitting sub-network and a classification sub-network;
obtaining the gastroscope image intestinal metaplasia risk grade classification result comprises the following steps:
fitting the characteristic quantization values of the tag attributes with the feature fitting sub-network to obtain a decision coefficient;
and analyzing the decision coefficient with the classification sub-network to obtain the identification result.
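The two-stage structure above can be sketched as a feature-fitting stage that combines the quantized tag-attribute values into a single decision coefficient, followed by a classification stage that maps the coefficient to a risk level. The weights and thresholds below are illustrative placeholders, not the patent's trained sub-network parameters.

```python
import numpy as np

def fit_features(quantized_values, weights, bias=0.0):
    """Feature-fitting sub-network stand-in: weighted combination -> decision coefficient."""
    return float(np.dot(quantized_values, weights) + bias)

def classify_risk(coefficient, thresholds=(0.33, 0.66)):
    """Classification sub-network stand-in: threshold the coefficient into a risk level."""
    if coefficient < thresholds[0]:
        return "low"
    if coefficient < thresholds[1]:
        return "medium"
    return "high"

# Hypothetical quantized values and weights, purely for illustration.
labels = np.array([0.2, 0.5, 0.1])
weights = np.array([0.4, 0.4, 0.2])
coefficient = fit_features(labels, weights)
risk_level = classify_risk(coefficient)
```

In the patent both stages are learned networks; the linear combination and fixed thresholds here merely show how a coefficient feeds a downstream classifier.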
In the description of the present application, it should be noted that orientation or positional relationships indicated by terms such as "upper" and "lower" are based on the orientations or positional relationships shown in the drawings, are merely for convenience and simplification of description, and do not indicate or imply that the apparatus or element in question must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present application. Unless otherwise specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; and direct, indirect through an intermediate medium, or a communication between two elements. The specific meanings of these terms in this application will be understood by those of ordinary skill in the art as the case may be.
It should be noted that in this application, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the application to enable one skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A method of predicting risk of progression of gastrointestinal epithelial metaplasia, comprising the steps of:
performing gastroscopic image intestinal metaplasia grade classification by acquiring characteristic quantization values of the tag attributes of the marker segmentation image, the atrophic gastritis segmentation image and the intestinal metaplasia segmentation image;
the first tag attribute of the marker segmentation image comprises a blood vessel attribute, a fold attribute, a color attribute and a diffuse attribute;
the second tag attribute of the atrophic gastritis segmentation image comprises a roughness attribute, a villus-like attribute, an off-white attribute and a bright blue ridge attribute;
the third tag attribute of the intestinal metaplasia segmentation image comprises a position attribute and a morphology attribute.
2. The method for predicting the risk of progression of gastrointestinal epithelial metaplasia according to claim 1, wherein performing gastroscopic image intestinal metaplasia grade classification by acquiring characteristic quantization values of the tag attributes of the marker segmentation image, the atrophic gastritis segmentation image and the intestinal metaplasia segmentation image comprises the following steps:
obtaining a gastroscope image, and carrying out marker segmentation on the gastroscope image to obtain a marker segmentation image;
extracting features of the first tag attributes from the marker segmentation image to obtain first characteristic quantization values corresponding to the first tag attributes, and inputting the first characteristic quantization values into a trained machine learning classifier for classification to obtain a gastroscope image atrophic gastritis classification result;
extracting features of the second tag attributes from the atrophic gastritis segmentation image to obtain second characteristic quantization values corresponding to the second tag attributes, and inputting the second characteristic quantization values into the trained machine learning classifier for classification to obtain a gastroscope image intestinal metaplasia classification result;
and extracting features of the third tag attributes from the intestinal metaplasia segmentation image to obtain third characteristic quantization values corresponding to the third tag attributes, and inputting the third characteristic quantization values into a trained machine learning classifier for classification to obtain a gastroscope image intestinal metaplasia risk grade classification result.
3. The method for predicting the risk of progression of gastrointestinal epithelial metaplasia according to claim 2, wherein the feature extraction of the first tag attribute is performed on the marker-segmented image, and the first feature quantification value corresponding to the first tag attribute is obtained, comprising the steps of:
acquiring a blood vessel segmentation image in the marker segmentation image;
according to the formula:
obtaining a blood vessel characteristic quantization value label_1, where n_v is the number of blood vessels in the blood vessel segmentation image, s_v is the blood vessel area in the blood vessel segmentation image, and S is the area of the marker segmentation image;
according to the formula:
obtaining a fold characteristic quantization value label_2, where s_z is the fold area in the marker region of the marker segmentation image, n_z is the number of folds in the marker region, S is the area of the marker segmentation image, s_f is the fold area in the non-marker region, and n_f is the number of folds in the non-marker region;
according to the formula:
obtaining a color characteristic quantization value label_3, where r_c, g_c, b_c are the three-channel average color values, S is the area of the marker region in the marker segmentation image, and the average is computed over the n_c list entries that remain after colors close to black are removed;
according to the formula:
obtaining a diffuse characteristic quantization value label_4, where M is the number of images taken of the entire stomach wall, w_Qi and h_Qi are the width and height of each image, S_i is the marker area obtained from each image by marker segmentation and connected-domain analysis, w_bi and h_bi are the width and height of the marker, and (x_bi, y_bi) are the centroid coordinates of the marker.
4. The method for predicting the risk of progression of gastrointestinal epithelial metaplasia according to claim 3, wherein said acquiring a blood vessel segmentation image from the marker segmentation image comprises:
converting the marker segmentation image into a gray level image and then binarizing the gray level image;
performing corrosion operation on the binarized marker segmentation image to obtain a marker segmentation image mask image;
carrying out median filtering on the gray level image of the marker segmentation image;
performing histogram equalization on the obtained marker segmentation image after median filtering;
performing gamma conversion on the marker segmentation image subjected to histogram equalization;
carrying out convolution operation on the obtained marker segmentation image after gamma conversion;
comparing the convolved marker segmentation image with the marker segmentation mask image bit by bit: if the pixel value at a position in the mask image is 0, the pixel value of the convolved marker segmentation image at that position is set to 0, obtaining a denoised marker segmentation image;
And carrying out contrast stretching on the denoised marker segmentation image to obtain a blood vessel segmentation image corresponding to the marker segmentation image.
5. The method for predicting the risk of progression of gastrointestinal epithelial metaplasia according to claim 2, wherein the feature extraction of the second tag attribute is performed on the segmented image of atrophic gastritis to obtain a second feature quantification value corresponding to the second tag attribute, comprising the steps of:
according to the formula:
obtaining a roughness characteristic quantization value label_5, where W_W and H_W are the width and height of the atrophic gastritis segmentation image, P_mean is the average pixel value of the atrophic gastritis segmentation image, Img_W is the atrophic gastritis segmentation image, and W_0 is the maximum-variance row obtained by binarizing the atrophic gastritis segmentation map with a set threshold, with 0 < W_0 < W_W;
according to the formula:
obtaining a villus-like characteristic quantization value label_6, where N_r is the number of villus-like regions, s_ri is the area of each villus-like region, w_ri and h_ri are the width and height of the minimum bounding rectangle of the villus-like region, and n_ri is the number of villus segments obtained by breaking the villus-like region at its corner points;
according to the formula:
obtaining an off-white characteristic quantization value label_7, where n_b is the number of pixels in the largest color class and (r_bi, g_bi, b_bi) are the pixel values of the pixels in that largest class;
according to the formula:
obtaining a bright blue ridge characteristic quantization value label_8, where W_L and H_L are the width and height of the bright blue ridge segmentation image, Img_L is the bright blue ridge segmentation image, (x_RSL, y_RSL) are its centroid coordinates, S_RSL is its area, (x_RS, y_RS) are the centroid coordinates of the dye-magnified atrophic gastritis segmentation image, S_RS is the area of the dye-magnified atrophic gastritis region, and P_L is the average pixel value of the bright blue ridge segmentation image, compared against the standard bright blue ridge three-channel average pixel value.
6. The method for predicting the risk of progression of gastrointestinal epithelial metaplasia according to claim 2, wherein the feature extraction of the third tag attributes is performed on the intestinal metaplasia segmentation image and the third characteristic quantization values corresponding to the third tag attributes are obtained, comprising the following steps:
according to the formula:
obtaining a position characteristic quantization value label_9, where S_C is the area of the intestinal metaplasia segmentation image, [(x_FDX, y_FDX), (x_FDD, y_FDD), (x_FJ, y_FJ), (x_FTX, y_FTX), (x_FTD, y_FTD)] are the centroid coordinates of the stomach risk site segmentation images, [S_FDX, S_FDD, S_FJ, S_FTX, S_FTD] are the areas of the risk sites, and list_d = [d_FDX, d_FDD, d_FJ, d_FTX, d_FTD] are the distances between the intestinal metaplasia segmentation image and each risk site;
according to the formula:
obtaining a morphology characteristic quantization value label_10, where W_C and H_C are the width and height of the minimum bounding rectangle of the intestinal metaplasia segmentation image, (x_C, y_C) are the centroid coordinates of the intestinal metaplasia segmentation image, r_C is the radius of its minimum circumscribed circle, n_f is the number of non-intestinal-metaplasia segmentation regions within the minimum circumscribed circle, s_fi is the area of each non-intestinal-metaplasia segmentation region, and (x_fi, y_fi) is the center of the minimum circumscribed circle of the intestinal metaplasia segmentation image.
7. The method for predicting the risk of progression of gastrointestinal epithelial metaplasia according to claim 2, wherein the third characteristic quantization value is input into a trained machine learning classifier for classification to obtain a gastroscope image intestinal metaplasia risk grade classification result, wherein the classifier comprises a feature fitting sub-network and a classification sub-network;
obtaining the gastroscope image intestinal metaplasia risk grade classification result comprises the following steps:
fitting the characteristic quantization values of the tag attributes with the feature fitting sub-network to obtain a decision coefficient;
and analyzing the decision coefficient with the classification sub-network to obtain the identification result.
8. A gastrointestinal epithelial metaplasia progression risk prediction device, configured to:
performing gastroscopic image intestinal metaplasia grade classification by acquiring characteristic quantization values of the tag attributes of the marker segmentation image, the atrophic gastritis segmentation image and the intestinal metaplasia segmentation image;
the first tag attribute of the marker segmentation image comprises a blood vessel attribute, a fold attribute, a color attribute and a diffuse attribute;
the second tag attribute of the atrophic gastritis segmentation image comprises a roughness attribute, a villus-like attribute, an off-white attribute and a bright blue ridge attribute;
the third tag attribute of the intestinal metaplasia segmentation image comprises a position attribute and a morphology attribute.
9. The gastrointestinal epithelial metaplasia progression risk prediction apparatus according to claim 8, comprising:
the acquisition module is used for acquiring gastroscope images;
the segmentation module is used for carrying out marker segmentation on the gastroscope image to obtain a marker segmentation image;
the feature extraction module is used for extracting features of the first tag attributes from the marker segmentation image and obtaining first feature quantized values corresponding to the first tag attributes;
the feature extraction module is further used for extracting features of the second tag attributes from the atrophic gastritis segmentation image to obtain second characteristic quantization values corresponding to the second tag attributes, and for extracting features of the third tag attributes from the intestinal metaplasia segmentation image to obtain third characteristic quantization values corresponding to the third tag attributes;
the generation module is used for inputting the first characteristic quantization values into a trained machine learning classifier for classification to obtain a gastroscope image atrophic gastritis classification result, and inputting the second characteristic quantization values into the trained machine learning classifier for classification to obtain a gastroscope image intestinal metaplasia classification result; and inputting the third characteristic quantization values into a trained machine learning classifier for classification to obtain a gastroscope image intestinal metaplasia risk grade classification result.
10. The gastrointestinal epithelial metaplasia progression risk prediction apparatus according to claim 8, wherein said feature extraction module is configured to:
perform feature extraction of the first tag attributes on the marker segmentation image and obtain the first characteristic quantization values corresponding to the first tag attributes, comprising the following steps:
acquiring a blood vessel segmentation image in the marker segmentation image;
according to the formula:
obtaining a blood vessel characteristic quantization value label_1, where n_v is the number of blood vessels in the blood vessel segmentation image, s_v is the blood vessel area in the blood vessel segmentation image, and S is the area of the marker segmentation image;
according to the formula:
obtaining a fold characteristic quantization value label_2, where s_z is the fold area in the marker region of the marker segmentation image, n_z is the number of folds in the marker region, S is the area of the marker segmentation image, s_f is the fold area in the non-marker region, and n_f is the number of folds in the non-marker region;
according to the formula:
obtaining a color characteristic quantization value label_3, where r_c, g_c, b_c are the three-channel average color values, S is the area of the marker region in the marker segmentation image, and the average is computed over the n_c list entries that remain after colors close to black are removed;
according to the formula:
obtaining a diffuse characteristic quantization value label_4, where M is the number of images taken of the entire stomach wall, w_Qi and h_Qi are the width and height of each image, S_i is the marker area obtained from each image by marker segmentation and connected-domain analysis, w_bi and h_bi are the width and height of the marker, and (x_bi, y_bi) are the centroid coordinates of the marker.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310639573.8A CN116434920A (en) | 2023-05-31 | 2023-05-31 | Gastrointestinal epithelial metaplasia progression risk prediction method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310639573.8A CN116434920A (en) | 2023-05-31 | 2023-05-31 | Gastrointestinal epithelial metaplasia progression risk prediction method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116434920A true CN116434920A (en) | 2023-07-14 |
Family
ID=87081739
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310639573.8A Withdrawn CN116434920A (en) | 2023-05-31 | 2023-05-31 | Gastrointestinal epithelial metaplasia progression risk prediction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116434920A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117238532A (en) * | 2023-11-10 | 2023-12-15 | 武汉楚精灵医疗科技有限公司 | Intelligent follow-up method and device |
CN117238532B (en) * | 2023-11-10 | 2024-01-30 | 武汉楚精灵医疗科技有限公司 | Intelligent follow-up method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109800824B (en) | Pipeline defect identification method based on computer vision and machine learning | |
CN111985536B (en) | Based on weak supervised learning gastroscopic pathology image Classification method | |
EP3455782B1 (en) | System and method for detecting plant diseases | |
Panchal et al. | Plant diseases detection and classification using machine learning models | |
CN109410194B (en) | Esophageal cancer pathological image processing method based on deep learning | |
CN106023151B (en) | Tongue object detection method under a kind of open environment | |
CN108181316B (en) | Bamboo strip defect detection method based on machine vision | |
CN110189303B (en) | NBI image processing method based on deep learning and image enhancement and application thereof | |
CN116092013B (en) | Dangerous road condition identification method for intelligent monitoring | |
CN104794502A (en) | Image processing and mode recognition technology-based rice blast spore microscopic image recognition method | |
CN112862808A (en) | Deep learning-based interpretability identification method of breast cancer ultrasonic image | |
CN112464942A (en) | Computer vision-based overlapped tobacco leaf intelligent grading method | |
CN115797352B (en) | Tongue picture image processing system for traditional Chinese medicine health-care physique detection | |
CN105512612A (en) | SVM-based image classification method for capsule endoscope | |
CN116434920A (en) | Gastrointestinal epithelial metaplasia progression risk prediction method and device | |
CN104933723A (en) | Tongue image segmentation method based on sparse representation | |
Shambhu et al. | Edge-based segmentation for accurate detection of malaria parasites in microscopic blood smear images: a novel approach using FCM and MPP algorithms | |
CN113807180A (en) | Face recognition method based on LBPH and feature points | |
CN117496247A (en) | Method for recognizing shortened image of pathological morphological feature crypt of inflammatory bowel disease | |
CN117649373A (en) | Digestive endoscope image processing method and storage medium | |
CN117593540A (en) | Pressure injury staged identification method based on intelligent image identification technology | |
CN109948706B (en) | Micro-calcification cluster detection method combining deep learning and feature multi-scale fusion | |
KR102430946B1 (en) | System and method for diagnosing small bowel preparation scale | |
CN117197064A (en) | Automatic non-contact eye red degree analysis method | |
Rungruangbaiyok et al. | Chromosome image classification using a two-step probabilistic neural network. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20230714 |