CN116977253B - Cleanliness detection method and device for endoscope, electronic equipment and medium

Info

Publication number
CN116977253B
CN116977253B (application CN202211707166.8A)
Authority
CN
China
Prior art keywords
image
cleanliness
classification
segmentation
detected
Prior art date
Legal status
Active
Application number
CN202211707166.8A
Other languages
Chinese (zh)
Other versions
CN116977253A (en)
Inventor
江代民 (Jiang Daimin)
周国义 (Zhou Guoyi)
Current Assignee
Opening Of Biomedical Technology Wuhan Co ltd
Original Assignee
Opening Of Biomedical Technology Wuhan Co ltd
Priority date
Filing date
Publication date
Application filed by Opening Of Biomedical Technology Wuhan Co ltd
Priority to CN202211707166.8A
Publication of CN116977253A
Application granted
Publication of CN116977253B
Legal status: Active
Anticipated expiration

Classifications

    • G06T7/0012 Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06N3/08 Computing arrangements based on biological models; neural networks; learning methods
    • G06V10/26 Image or video recognition or understanding; image preprocessing; segmentation of patterns in the image field, e.g. clustering-based techniques; detection of occlusion
    • G06V10/764 Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V10/82 Image or video recognition or understanding using pattern recognition or machine learning; neural networks
    • G06T2207/10068 Image acquisition modality; endoscopic image
    • G06T2207/20081 Special algorithmic details; training; learning
    • G06T2207/20084 Special algorithmic details; artificial neural networks [ANN]
    • G06T2207/30004 Subject of image; biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a cleanliness detection method and device for an endoscope, an electronic device, and a medium. The method includes a cleanliness detection operation, the cleanliness detection operation including: acquiring an image to be measured acquired by an endoscope; inputting the image to be measured into a cleanliness evaluation model to obtain a classification result and a segmentation result corresponding to the image to be measured, wherein the classification result is used for indicating whether the image to be measured is a scorable image or a non-scorable image, and the segmentation result is used for indicating the position of the content in the image to be measured; and if the classification result indicates that the image to be measured is a scorable image, determining a cleanliness score of the image to be measured based on the segmentation result. The subjectivity and contingency of manual judgment are thereby avoided, and the accuracy of scoring is improved. The fusion of classification and segmentation can further improve the accuracy of the cleanliness score.

Description

Cleanliness detection method and device for endoscope, electronic equipment and medium
Technical Field
The invention belongs to the technical field of endoscopes, and particularly relates to a cleanliness detection method and device for an endoscope, electronic equipment and a storage medium.
Background
An electronic endoscope (endoscope) is a medical electro-optical instrument that can be inserted into a body cavity or an internal cavity of a human organ for direct observation, diagnosis and treatment. Electronic endoscopes include gastroscopes, enteroscopes, and the like. When an electronic endoscope collects an endoscopic image of a specific area, the cleanliness of the endoscopy area has a large influence on the imaging quality of the endoscopic image.
The cleanliness of the endoscopy area is generally scored according to predetermined scoring criteria. For example, intestinal images may be scored by the Boston scoring criteria. Such scoring, however, typically relies on manual labeling by a physician based on his or her experience, which is subjective and somewhat arbitrary. For example, for an intestinal image with a small amount of stool, some doctors may give a score of 2 while others give a score of 3, and even the same doctor labeling the same image at different times may give different scores. It is therefore difficult for such existing scoring methods to objectively and accurately reflect the cleanliness score and cleanliness of enteroscopy images.
Accordingly, a new cleanliness detection scheme for an endoscope is needed to solve the above-mentioned problems.
Disclosure of Invention
In order to solve at least in part the problems in the prior art, a cleanliness detection method and apparatus for an endoscope, an electronic device, and a storage medium are provided.
According to an aspect of the present invention, there is provided a cleanliness detection method for an endoscope, the method including a cleanliness detection operation, the cleanliness detection operation including: acquiring an image to be measured acquired by the endoscope; inputting the image to be measured into a cleanliness evaluation model to obtain a classification result and a segmentation result corresponding to the image to be measured, wherein the classification result is used for indicating whether the image to be measured is a scorable image or a non-scorable image, and the segmentation result is used for indicating the position of the content in the image to be measured; and if the classification result indicates that the image to be measured is a scorable image, determining a cleanliness score of the image to be measured based on the segmentation result.
Illustratively, the cleanliness evaluation model includes an encoder module, a decoder module, and a classification head, and the inputting of the image to be measured into the cleanliness evaluation model to obtain a classification result and a segmentation result corresponding to the image to be measured includes: inputting the image to be measured into the encoder module to obtain at least one set of encoding features; inputting the at least one set of encoding features into the decoder module to obtain the segmentation result; and inputting at least part of the at least one set of encoding features into the classification head to obtain the classification result.
Illustratively, the encoder module and the classification head form a residual network, the encoder module comprising a plurality of convolution modules of the residual network and the classification head comprising the fully connected layer of the residual network, while the decoder module is implemented with the decoder module of a U-shaped network.
Illustratively, the cleanliness evaluation model is obtained by training in the following manner: acquiring a first sample image and corresponding annotation information, wherein the annotation information includes a classification label and, in the case that the classification label indicates that the first sample image is a scorable image, further includes a first segmentation label, the classification label being used for indicating whether the first sample image is a scorable image or a non-scorable image, and the first segmentation label being used for indicating the position of the content in the first sample image; inputting the first sample image into the cleanliness evaluation model to obtain a prediction classification result and a prediction segmentation result corresponding to the first sample image, wherein the prediction classification result is used for indicating whether the first sample image is a scorable image or a non-scorable image, and the prediction segmentation result is used for indicating the position of the content in the first sample image; in the case where the classification label indicates that the first sample image is a scorable image, calculating a first classification loss based on the classification label and the prediction classification result, calculating a segmentation loss based on the first segmentation label and the prediction segmentation result, and optimizing parameters of the cleanliness evaluation model based on the first classification loss and the segmentation loss; and in the case where the classification label indicates that the first sample image is a non-scorable image, calculating a second classification loss based on the classification label and the prediction classification result, and optimizing parameters of the cleanliness evaluation model based on the second classification loss.
Illustratively, the optimizing parameters of the cleanliness assessment model based on the first classification loss and the segmentation loss includes: carrying out weighted summation or weighted average on the first classification loss and the segmentation loss based on a preset weight to obtain a total loss; and optimizing parameters of the cleanliness evaluation model based on the total loss.
Illustratively, the determining the cleanliness score of the image to be measured based on the segmentation result includes: determining the area of the content in the image to be measured based on the segmentation result; calculating a first proportion between the area of the content in the image to be measured and the total area of the image to be measured; determining which of a plurality of preset proportion ranges the first proportion falls into, wherein the plurality of preset proportion ranges correspond one-to-one to a plurality of preset cleanliness scores; and determining, based on the specific preset proportion range into which the first proportion falls and the correspondence between the preset proportion ranges and the preset cleanliness scores, the preset cleanliness score corresponding to that range as the cleanliness score of the image to be measured.
Illustratively, the correspondence between the preset proportion ranges and the preset cleanliness scores is obtained by: acquiring second segmentation labels and preset cleanliness scores corresponding to a plurality of second sample images, wherein the second segmentation labels are used for indicating positions of the content in the corresponding second sample images, and the preset cleanliness scores corresponding to the plurality of second sample images include the plurality of preset cleanliness scores; for each second sample image of the plurality of second sample images, determining the area of the content in the second sample image based on the second segmentation label corresponding to that second sample image, and calculating a second proportion between the area of the content in the second sample image and the total area of the second sample image; and determining the correspondence between the preset proportion ranges and the preset cleanliness scores based on the second proportions corresponding to the second sample images.
Illustratively, the image to be measured is an image acquired by the endoscope for a target examination area in real time. In this case, after determining the cleanliness score of the image to be measured based on the segmentation result if the classification result indicates that the image to be measured is a scorable image, the method further includes: comparing the cleanliness score with a cleanliness threshold; and if the cleanliness score is lower than the cleanliness threshold, cleaning the target examination area.
Illustratively, the method further comprises: if the classification result indicates that the image to be measured is a non-scorable image, performing a corresponding no-score feedback operation.
Illustratively, the no-score feedback operation includes performing a no-score operation, the no-score operation comprising one or more of: deleting the image to be measured; deleting the segmentation result; and outputting prompt information.
Illustratively, when the number of images to be measured has not reached a preset number, the no-score feedback operation corresponding to any current image to be measured includes: returning to the step of acquiring the image to be measured acquired by the endoscope, and performing the cleanliness detection operation on the next image to be measured after the current image to be measured. When the number of images to be measured reaches the preset number, the no-score feedback operation corresponding to the current image to be measured includes: if the preset number of images to be measured are all non-scorable images, performing a no-score operation, the no-score operation including one or more of: deleting the preset number of images to be measured; deleting the segmentation results corresponding to the preset number of images to be measured; and outputting prompt information. A minimal sketch of this flow is given below.
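As an illustration only, the following Python sketch wires the acquisition loop, the preset-number check, and the no-score operation together. The helper functions acquire_image, evaluate, and score_from_segmentation, the preset count of 5, and the use of print for the prompt are all assumptions, not part of the patent's disclosure:

```python
PRESET_COUNT = 5  # assumed preset number of images to be measured

def cleanliness_detection_loop(endoscope):
    """Acquire images until one is scorable or the preset number of
    consecutive non-scorable images is reached."""
    collected = []
    while len(collected) < PRESET_COUNT:
        image = acquire_image(endoscope)          # hypothetical helper
        scorable, segmentation = evaluate(image)  # hypothetical helper
        collected.append((image, segmentation))
        if scorable:
            return score_from_segmentation(segmentation)  # hypothetical
        # non-scorable: return to acquisition and detect the next image
    # Preset number reached with only non-scorable images: perform the
    # no-score operation - delete the images and segmentation results,
    # and output prompt information.
    collected.clear()
    print("Unable to score: all collected images were non-scorable")
    return None
```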
According to another aspect of the present invention, there is provided a cleanliness detection device for an endoscope, the device including a cleanliness detection module comprising: an acquisition sub-module, used for acquiring an image to be measured acquired by the endoscope; an input sub-module, used for inputting the image to be measured into a cleanliness evaluation model to obtain a classification result and a segmentation result corresponding to the image to be measured, wherein the classification result is used for indicating whether the image to be measured is a scorable image or a non-scorable image, and the segmentation result is used for indicating the position of the content in the image to be measured; and a determining sub-module, used for determining the cleanliness score of the image to be measured based on the segmentation result if the classification result indicates that the image to be measured is a scorable image.
According to still another aspect of the present invention, there is provided an electronic device including a processor and a memory, the memory storing a computer program, the processor executing the computer program to implement the above-described cleanliness detection method for an endoscope.
According to still another aspect of the present invention, there is provided a storage medium storing a computer program/instruction which, when executed by a processor, implements the above-described cleanliness detection method for an endoscope.
According to the technical scheme provided by the embodiments of the invention, the classification result and the segmentation result are obtained based on the image to be measured, and the cleanliness score is determined using the classification result and the segmentation result. In addition, the scheme classifies and segments the image to be measured synchronously and, based on the classification result, performs cleanliness scoring only on images with scoring value (i.e., scorable images), so that interference with the cleanliness score from scenes such as flushing, the lens being too close to tissue, field-of-view loss, a too-dark field of view, or overexposure can be removed, further improving the accuracy of the cleanliness score.
The foregoing is merely an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present invention may be more readily apparent, specific embodiments of the invention are described below.
Drawings
The following drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification. The drawings show embodiments of the present invention and, together with their description, serve to explain the principles of the invention. In the drawings:
FIG. 1 shows a schematic flow chart of a cleanliness detection operation according to one embodiment of the present invention;
FIG. 2 shows a schematic flow chart of classifying and segmenting an image to be measured according to one embodiment of the present invention;
FIG. 3 shows a schematic block diagram of a cleanliness detection module according to one embodiment of the present invention; and
FIG. 4 shows a schematic block diagram of an electronic device according to one embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present invention and not all embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art without any inventive effort, based on the embodiments described in the present invention shall fall within the scope of protection of the present invention.
To at least partially solve the above-described technical problems, an embodiment of the present invention provides a cleanliness detection method for an endoscope, the method including a cleanliness detection operation 100. FIG. 1 shows a schematic flow diagram of a cleanliness detection operation 100 according to one embodiment of the present invention. As shown in fig. 1, the cleanliness detection operation 100 may include the following steps S110, S120, and S130.
Step S110, acquiring an image to be measured acquired by the endoscope.
The image to be measured may be an original image directly collected by the electronic endoscope, for example, a gastroscopic or enteroscopic image. The image to be measured may also be an image obtained by preprocessing the original image. The preprocessing operations may include any operation that facilitates image recognition or feature extraction, enhances the detectability of the image, or eliminates extraneous information in the image. For example, the preprocessing operation may include denoising operations such as mean filtering and Gaussian filtering, and may also include operations such as image sharpening and image enhancement. The image to be measured may be an image containing the target examination region. The target examination region may be any biological tissue region, including but not limited to the stomach, intestinal tract, esophagus, etc.
The image to be measured may be a single image of the target examination region acquired by the endoscope, or a plurality of images. The plurality of images may be acquired by the endoscope continuously, or at time intervals, over a period of time for the target examination region. In one embodiment, after the plurality of images are collected, they may be ranked and screened according to image sharpness: the unclear images are removed, and the remaining, relatively sharp images are used as images to be measured. For example, if ten images are collected, after ranking them by sharpness, the seven images with poor sharpness are removed, and the remaining three images are used as images to be measured. A sketch of such screening is given below.
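As a concrete illustration, the following Python sketch ranks candidate frames by the variance of the Laplacian, a common sharpness proxy, and keeps the sharpest frames. The metric and the OpenCV-based implementation are assumptions; the patent does not prescribe a particular sharpness measure:

```python
import cv2

def select_sharpest(images, keep=3):
    """Rank candidate frames by a sharpness proxy (variance of the
    Laplacian) and keep the `keep` sharpest frames as the images to
    be measured."""
    def sharpness(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()
    return sorted(images, key=sharpness, reverse=True)[:keep]
```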
Step S120, inputting the image to be measured into a cleanliness evaluation model to obtain a classification result and a segmentation result corresponding to the image to be measured, wherein the classification result is used for indicating whether the image to be measured is a scorable image or a non-scorable image, and the segmentation result is used for indicating the position of the content in the image to be measured.
For example, a non-scorable image may indicate that there is an abnormality in the current image, i.e., the cleaning state of the target examination area cannot be accurately judged from the information in the image. For example, a non-scorable image may be an image acquired under conditions of excessive interference, such as excessive water on the lens, the endoscope being too close to tissue, a blurred field of view, a lost field of view, or a too-dark field of view. A scorable image is one from which the cleaning state of the target examination region can be judged relatively accurately. That is, the interference in a scorable image is small and does not affect the evaluation of the cleaning state.
For a scorable image, the cleaning state of the target examination area is judged mainly based on the content in the image, where the content is a substance that adheres to the target examination area and affects the quality of the endoscopy. For example, during enteroscopy, the contents of the intestine may include opaque liquids, residual stool, solid stool, and the like; during gastroscopy, the contents of the stomach may include food residues, air bubbles, etc.
Illustratively, the cleanliness evaluation model may be a neural network model capable of simultaneously classifying and segmenting an endoscopic image (including the image to be measured described above). For example, the cleanliness evaluation model may include at least part of the network structure of one or more of the following neural network models: U-shaped networks (U-Net), fully convolutional networks (FCN), deep convolutional encoder-decoder structures for image segmentation (SegNet), pyramid scene parsing networks (PSPNet), residual networks (ResNet), and the like. Of course, the neural network models listed above are merely examples, and the cleanliness evaluation model may also be implemented using other suitable network structures. In one embodiment, an image classification model and a semantic segmentation model can be constructed from a ResNet neural network and a U-Net neural network respectively: the image to be measured is classified by the ResNet neural network, and the position of the content in the image to be measured is obtained by segmentation with the U-Net neural network. Illustratively, the part of the ResNet neural network other than the fully connected module may be used as the backbone network (backbone) of the cleanliness evaluation model, and the fully connected module of the ResNet neural network may be connected after the backbone as a classification head for image classification. In addition, the decoder portion of the U-Net neural network (comprising a plurality of upsampling modules) can be connected to the backbone as a decoder module, so as to achieve image segmentation through the decoder module. Alternatively, the batch normalization (BN) layers in the ResNet network structure may be replaced with an inverted bottleneck (IBN) structure, employing ResNet-IBN, a fusion of ResNet with the IBN structure, as the backbone of the cleanliness evaluation model. The fully connected module is then connected to the backbone to achieve image classification, and the decoder module is connected to the backbone to achieve image segmentation. It can be appreciated that after the cleanliness evaluation model with initial parameters is obtained, a cleanliness evaluation model with higher accuracy can be obtained through multiple rounds of machine learning.
Illustratively, the cleanliness evaluation model may include an encoder module, a decoder module, and a classification head. The encoder module may be implemented with any downsampling module, which may include, for example, convolutional layers, pooling layers, etc., whereby the encoder module can gradually reduce the size of the features to obtain deep semantic features. The decoder module may be implemented with any upsampling module, which may include a deconvolution layer or the like. The decoder module is able to gradually recover the details of the image and the corresponding spatial dimensions. The encoder module and the decoder module may be connected by skip connections. In addition, the classification head may be implemented using a fully connected module, which may include one or more fully connected layers, an activation function layer, and the like. The activation function layer may be, for example, a softmax layer.
In step S130, if the classification result indicates that the image to be measured is a scorable image, the cleanliness score of the image to be measured is determined based on the segmentation result.
Illustratively, the classification result may be a probability value indicating how likely it is that the image to be measured is a scorable image. The larger the value of the classification result, the larger the probability that the image to be measured is a scorable image. The value of the classification result can be compared with a preset threshold; if it is larger than the preset threshold, the image to be measured is determined to be a scorable image, otherwise it is determined to be a non-scorable image. Of course, the classification result may instead be a probability value indicating how likely it is that the image to be measured is a non-scorable image. A sketch of this thresholding is given below.
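The following sketch applies such a preset threshold to the output of the classification head. The softmax conversion, the class index used for "scorable", and the threshold value of 0.5 are assumptions; the patent only requires a comparison against a preset threshold:

```python
import torch

def is_scorable(logits: torch.Tensor, threshold: float = 0.5) -> bool:
    """Decide scorable vs non-scorable from the classification head output.

    logits: tensor of shape (2,), raw scores of the two classes.
    Index 1 is assumed to be the 'scorable' class.
    """
    prob_scorable = torch.softmax(logits, dim=-1)[1]
    return bool(prob_scorable > threshold)
```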
For example, after confirming that the image to be measured is a scorable image according to the classification result, the cleanliness of the image to be measured may be scored according to the position of the content in the image to be measured indicated by the segmentation result.
Illustratively, the image to be measured may be segmented by the cleanliness evaluation model into a mask image. The mask, i.e., the location of the content, may be represented on the mask image by a white area with a pixel value of 255, and the background, i.e., the remaining area outside the content, may be represented by a black area with a pixel value of 0. In one embodiment, the area of the white region of the mask image is calculated as the area of the content, and the proportion of the content area to the area of the image to be measured is used as the basis of the cleanliness score. The larger this image proportion, the larger the area occupied by the content in the target examination area, i.e., the worse the cleanliness of the target examination area and the lower the cleanliness score. The smaller the image proportion, the smaller the area occupied by the content, i.e., the better the cleanliness and the higher the cleanliness score. For example, different area-proportion threshold ranges may be set for different cleanliness scores, and the proportion may be mapped to a cleanliness score according to the range into which it falls, as in the sketch below.
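A minimal sketch of computing this image proportion from the binarized mask image, assuming the mask is a NumPy array with content pixels equal to 255:

```python
import numpy as np

def content_ratio(mask: np.ndarray) -> float:
    """First proportion: content (white, 255) pixels over the total
    number of pixels in the binarized mask image."""
    return float((mask == 255).sum()) / mask.size
```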
Illustratively, the cleanliness score may adopt the Boston scoring criteria. Taking an enteroscopy image as an example, a score of 0 represents a completely unprepared colon, i.e., the intestinal mucosa is not visible because solid stool has not been cleared. A score of 1 represents that the mucosa of the colon segment is only partly visible due to staining, residual stool or opaque liquid. A score of 2 represents a small amount of staining, small pieces of fecal material or opaque liquid, with most of the colon segment mucosa seen well. A score of 3 represents no staining, residual small pieces of fecal matter or opaque liquid, with the whole segment of colonic mucosa seen well. Alternatively, the scoring may be performed using the Wortmann scoring criteria.
When an image is a non-scorable image, the cleaning state of the target examination area cannot be accurately judged from the information in the image, so there is no need to score such an image. In this case, either no operation is performed, or a no-score feedback operation is performed; examples of no-score feedback operations are described below.
According to this technical scheme, the classification result and the segmentation result are obtained based on the image to be measured, and the cleanliness score is determined using the classification result and the segmentation result. In addition, the scheme classifies and segments the image to be measured synchronously and, based on the classification result, performs cleanliness scoring only on images with scoring value (i.e., scorable images), so that interference with the cleanliness score from scenes such as flushing, the lens being too close to tissue, field-of-view loss, a too-dark field of view, or overexposure can be removed, further improving the accuracy of the cleanliness score.
Illustratively, the cleanliness evaluation model includes an encoder module, a decoder module, and a classification head. In this case, step S120 of inputting the image to be measured into the cleanliness evaluation model to obtain the classification result and the segmentation result corresponding to the image to be measured may include steps S121, S122, and S123.
Step S121, inputting the image to be measured into the encoder module to obtain at least one set of encoding features.
The encoder module may be regarded as the backbone of the cleanliness evaluation model. As described above, the encoder module may be implemented with any downsampling module, which may include, for example, convolutional layers, pooling layers, etc., whereby the encoder module progressively reduces the size of the features to obtain deep semantic features. In one embodiment, the encoder module may be implemented using the network structure of a ResNet neural network other than its fully connected module, for example using the plurality of convolution modules in ResNet50. First, the image to be measured is input into the ResNet. Then, information of different scales in the image to be measured is extracted by convolution with kernels of different sizes in each convolution module of the ResNet. The extracted features are the encoding features.
Illustratively, the encoding features may include color information, luminance information, texture information, and boundary information. The encoding features may be represented by a high-dimensional matrix and are obtained by feature extraction on the image to be measured by the encoder module.
Step S122, inputting the at least one set of encoding features into the decoder module to obtain the segmentation result.
As described above, the decoder module may be implemented with any upsampling module, which may include a deconvolution layer or the like. The decoder module is able to gradually recover the details of the image and the corresponding spatial dimensions. The encoder module and the decoder module may be connected by skip connections. In one embodiment, the decoder module can be implemented using the decoder module of a U-Net neural network. For example, the features extracted by each of the above-described encoder stages may be input into the decoder module via skip connections. The decoder module then performs upsampling operations on the features in sequence, fuses them, and concatenates the features along the channel dimension to form an overall feature map. The overall feature map may be represented by a binarized mask image, whereby the segmentation result is expressed as a mask image.
Step S123, inputting at least part of the at least one set of encoding features into the classification head to obtain the classification result.
The classification head is the network structure used to classify the image to be measured according to the encoding features, i.e., to distinguish whether the image to be measured is a scorable image or a non-scorable image. The classification head may use a fully connected (FC, or classify) module to implement the classification function. In one embodiment, the features extracted by one or more convolution modules of the encoder module may be input into the classification head. For example, the features extracted by the first convolution module are input into the classification head and used for classification. Alternatively, the features extracted by a plurality of convolution modules in the encoder module may each be input into the classification head. For example, the features extracted by the first, second, and third convolution modules are input together into the classification head and used for classification. Still alternatively, the features of the last convolution module of the encoder module may be input into the classification head and used for classification. The features of the last convolution module of the encoder module represent the deepest whole-image features of the image to be measured, and the most accurate classification result can be obtained based on these features. Therefore, the features of the last convolution module of the encoder module are preferably used to classify the image to be measured.
According to the above technical scheme, in the cleanliness evaluation model the encoder module serves as the backbone, and the decoder module and the classification head share the features extracted by the encoder module. This scheme requires a small number of parameters, thereby improving the training and inference efficiency of the cleanliness evaluation model.
Illustratively, the encoder module and the classification head form a residual network, the encoder module comprising a plurality of convolution modules of the residual network and the classification head comprising the fully connected layer of the residual network, while the decoder module is implemented with the decoder module of a U-shaped network.
For example, the classification head may be attached after the last convolution module of the encoder module to form a residual network, so that the extracted image features are input into the classification head for classification. In one embodiment, the encoder module is implemented using the network structure of a ResNet50 neural network other than its fully connected module, and the classification head is implemented using the fully connected module of the ResNet50 neural network. The ResNet50 neural network has five stages, namely stage0, stage1, stage2, stage3, and stage4, each of which can be considered a convolution module. The features extracted by stage4 can be input into the subsequent fully connected layer for binary classification, so as to judge whether the image to be measured can be scored.
Illustratively, the segmentation network is formed by the cooperation of the encoder module and the decoder module. The features extracted by each convolution module of the encoder module are input into the decoder module, and the decoder module performs stepwise upsampling operations on the features, gradually restoring each feature map toward the size of the image to be measured. The decoder module then deconvolves the feature maps and concatenates the deconvolved feature maps along the channel dimension to obtain an overall feature map. In one embodiment, the downsampling portion of the U-Net neural network is replaced with the ResNet50 neural network. First, feature maps are extracted by stage0, stage1, stage2, stage3, and stage4 of the ResNet50 neural network, respectively. The feature maps are then input to the upsampling portion of the U-Net neural network via skip connections. The U-Net neural network deconvolves the feature maps, and the deconvolved feature maps are concatenated and fused along the channel dimension to obtain the overall feature map. The mask area on the overall feature map represents the area where the content is located.
A residual network is easy to train, has a flexible structure, and can improve network precision simply by increasing network depth. Meanwhile, U-Net is a simple and efficient network model. Therefore, according to the above technical scheme, the fully connected layer of the residual network is retained as the classification head, and the convolution modules of the residual network, the classification head, and the U-Net decoder module are combined, so that a single model can judge whether an image to be measured is scorable and identify the content. The advantages of the residual network and of U-Net are thus combined, yielding a cleanliness evaluation model that is simple, efficient, accurate, and convenient to train.
FIG. 2 shows a schematic flow chart of classifying and segmenting an image to be measured according to one embodiment of the invention. As shown in FIG. 2, the downsampling portion of the U-Net neural network is replaced with the ResNet50 neural network. The model comprises two parts, a classification part and a segmentation part. The classification part is used for distinguishing whether an image can be scored, and the segmentation part is used for identifying the unclean object (content) portion of a scorable image to obtain a mask of the unclean object. In the classification part, the image is input into the ResNet50 neural network, and features are extracted by stage0, stage1, stage2, stage3, and stage4 in turn. The features extracted by stage4 are input into the fully connected classification head for binary classification, to distinguish whether the image to be measured is a scorable image. In the segmentation part, the features extracted by stage0, stage1, stage2, stage3, and stage4 during classification are input into the decoder module of the U-Net neural network through skip connections (copy). The features corresponding to each stage are transformed by deconvolution after being input into the U-Net neural network, and the transformed features enter the channel corresponding to the previous stage through upsampling and are fused with the features of the previous stage. For example, the features extracted by stage4 are transformed by a conv 3×3 and output through a ReLU function; the output features are upsampled by an up-conv 2×2 and fused with the features extracted by stage3. When upsampling reaches the last stage, the fused features are transformed by a conv 1×1 and then output; the mask on the output image is the position of the content. A sketch of such a network is given below.
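As a concrete illustration, the following PyTorch sketch assembles a shared ResNet50 backbone with a fully connected classification head and a U-Net style decoder in the spirit of FIG. 2. Channel widths follow torchvision's ResNet50; the exact decoder widths, the final bilinear upsampling back to the input size, and the two-class output layout are assumptions rather than the patent's definitive implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class CleanlinessModel(nn.Module):
    """Shared ResNet50 backbone with a fully connected classification
    head and a U-Net style decoder, in the spirit of FIG. 2."""

    def __init__(self):
        super().__init__()
        r = resnet50(weights=None)
        self.stage0 = nn.Sequential(r.conv1, r.bn1, r.relu)  # 64 ch,  1/2
        self.pool = r.maxpool                                #          1/4
        self.stage1 = r.layer1                               # 256 ch,  1/4
        self.stage2 = r.layer2                               # 512 ch,  1/8
        self.stage3 = r.layer3                               # 1024 ch, 1/16
        self.stage4 = r.layer4                               # 2048 ch, 1/32
        # classification head: the retained fully connected layer
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(2048, 2))
        # decoder: up-conv 2x2 followed by conv 3x3 + ReLU at each stage
        self.up4 = self._up(2048, 1024)
        self.up3 = self._up(2048, 512)   # 1024 up + 1024 skip
        self.up2 = self._up(1024, 256)   # 512 up + 512 skip
        self.up1 = self._up(512, 64)     # 256 up + 256 skip
        self.out = nn.Conv2d(128, 1, kernel_size=1)  # conv 1x1 -> mask

    @staticmethod
    def _up(in_ch, out_ch):
        return nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))

    def forward(self, x):
        f0 = self.stage0(x)
        f1 = self.stage1(self.pool(f0))
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        f4 = self.stage4(f3)
        logits = self.cls_head(f4)               # scorable / non-scorable
        d = self.up4(f4)                         # 1/16
        d = self.up3(torch.cat([d, f3], dim=1))  # 1/8
        d = self.up2(torch.cat([d, f2], dim=1))  # 1/4
        d = self.up1(torch.cat([d, f1], dim=1))  # 1/2
        mask = self.out(torch.cat([d, f0], dim=1))
        mask = F.interpolate(mask, scale_factor=2, mode="bilinear",
                             align_corners=False)  # back to input size
        return logits, mask
```

For a 224×224 input, logits has shape (1, 2) and mask has shape (1, 1, 224, 224), so the decoder and the classification head share the backbone features f0 to f4, matching the parameter-sharing advantage described above.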
Illustratively, the cleanliness detection operation 100 may further include a training step S150 for training a cleanliness assessment model. The training step of the cleanliness evaluation model may include step S151, step S152, step S153, and step S154.
In step S151, the first sample image and the corresponding annotation information are acquired. The annotation information includes a classification label and, in the case that the classification label indicates that the first sample image is a scorable image, further includes a first segmentation label, where the classification label is used to indicate whether the first sample image is a scorable image or a non-scorable image, and the first segmentation label is used to indicate the position of the content in the first sample image.
It will be appreciated that the number of first sample images may be one or more. Preferably, the first sample images correspond to the type of the image to be measured, for example both being enteroscopy images or both being gastroscopy images. Of course, this is merely an example; the image to be measured may, for instance, be an enteroscopy image while the first sample image is a gastroscopy image, or the like. The classification label may include category information indicating whether the first sample image is a scorable image or a non-scorable image. The first segmentation label may include position information indicating the position of the content in the region to be measured. The category information and the position information can be obtained through device identification, or through manual labeling by annotators. For example, areas of content such as feces or opaque liquids may be marked by an annotator by tracing, and the traced envelope of the content may be regarded as the first segmentation label. Alternatively, a binarized mask image may be automatically generated from the annotator's labeling of the content position, in which the pixel values of the region where the content is located are set to 255 and the pixel values of the other regions are set to 0. This binarized mask image may be used as the first segmentation label. A sketch of such mask generation is given below.
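A minimal sketch of generating such a binarized mask from an annotator's traced envelope, assuming the envelope is given as a polygon of (x, y) points; the OpenCV-based implementation is an assumption:

```python
import cv2
import numpy as np

def mask_from_annotation(image_shape, envelope_points):
    """Build the binarized mask used as the first segmentation label:
    pixels inside the traced content envelope are set to 255, all
    other pixels to 0."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    pts = np.asarray(envelope_points, dtype=np.int32)
    cv2.fillPoly(mask, [pts], 255)
    return mask
```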
Step S152, inputting the first sample image into the cleanliness evaluation model to obtain a prediction classification result and a prediction segmentation result corresponding to the first sample image, wherein the prediction classification result is used for indicating whether the first sample image is a scorable image or a non-scorable image, and the prediction segmentation result is used for indicating the position of the content in the first sample image.
For example, the initial cleanliness evaluation model can be obtained by fusing the ResNet50 neural network and the U-Net neural network. The specific fusion manner has been described in detail in the above embodiments and is not repeated here. The first sample image is input into the cleanliness evaluation model, which extracts features of the sample image and, based on those features, classifies the first sample image and performs image segmentation, thereby obtaining the prediction classification result and the prediction segmentation result. The prediction classification result and prediction segmentation result corresponding to the first sample image have meanings, and are obtained in manners, similar to those of the classification result and segmentation result corresponding to the image to be measured, so step S152 can be understood with reference to the foregoing and is not described again here.
In step S153, in the case where the classification label indicates that the first sample image is a scorable image, a first classification loss is calculated based on the classification label and the prediction classification result, a segmentation loss is calculated based on the first segmentation label and the prediction segmentation result, and parameters of the cleanliness evaluation model are optimized based on the first classification loss and the segmentation loss.
Illustratively, the classification losses described herein (including the first classification loss and the second classification loss) are used to represent classification errors, which may be derived from the differences between the predicted classification results and the classification labels. The corresponding classification loss is the first classification loss when the first sample image is a scorable image, and the second classification loss when the first sample image is a non-scorable image. The terms "first" and "second" are used here primarily for distinction and carry no other special meaning, such as an ordering. The first classification loss and the second classification loss may be calculated based on the same loss function or on different loss functions, but both represent the error between the predicted classification result and the classification label.
In one embodiment, there are a plurality of first sample images, and training is iterated using them. For example, suppose the number of first sample images is 100 and the classification labels of all 100 indicate scorable images (positive samples). If, after inputting these 100 first sample images into the cleanliness evaluation model, the prediction classification results indicate that 8 of them are non-scorable images, there are errors between the prediction classification results and the classification labels, and the first classification loss can be obtained from these errors.
For example, the first classification loss may be calculated using a binary cross-entropy loss function, where y_gt denotes the classification label, y_pre denotes the prediction classification result, and Loss_cls1 denotes the first classification loss:

Loss_cls1 = -(y_gt · log(y_pre) + (1 - y_gt) · log(1 - y_pre))
Illustratively, the segmentation loss represents the error in the recognition of the position of the content. Similarly, the segmentation loss may be derived from the difference between the first segmentation label and the prediction segmentation result. In one embodiment, the segmentation loss may be calculated using a Dice loss function as follows:

Loss_seg = 1 - 2 · |pred ∩ true| / (|pred| + |true|)

where Loss_seg denotes the segmentation loss, |pred ∩ true| denotes the number of intersection elements between the first segmentation label and the prediction segmentation result, and |pred| + |true| denotes the total number of elements of the first segmentation label and the prediction segmentation result.
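A minimal PyTorch sketch of this Dice loss, assuming the network outputs raw mask logits and the label is a binary mask of the same shape; the smoothing term eps is an added numerical-stability assumption:

```python
import torch

def dice_loss(pred_logits: torch.Tensor, true_mask: torch.Tensor,
              eps: float = 1e-6) -> torch.Tensor:
    """Dice loss matching the formula above. Both tensors are expected
    with shape (N, 1, H, W); `true_mask` is binary (0/1)."""
    pred = torch.sigmoid(pred_logits).flatten(1)  # soft prediction
    true = true_mask.flatten(1)
    intersection = (pred * true).sum(dim=1)       # |pred ∩ true|
    total = pred.sum(dim=1) + true.sum(dim=1)     # |pred| + |true|
    return (1 - (2 * intersection + eps) / (total + eps)).mean()
```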
Illustratively, after the above first classification loss and segmentation loss are obtained, the total loss (first total loss) may be obtained from the two losses. The parameters of the initial cleanliness evaluation model can then be optimized using back-propagation and gradient descent based on the first total loss. The optimization of the parameters may be performed iteratively until the cleanliness evaluation model converges.
Illustratively, after the first classification loss is obtained, the total loss Loss_sum may be obtained from the first classification loss Loss_cls1 and the segmentation loss Loss_seg. Illustratively, the total loss Loss_sum can be calculated according to the following formula:

Loss_sum = Loss_cls1 + Loss_seg
In step S154, in the case where the classification label indicates that the first sample image is a non-scorable image, a second classification loss is calculated based on the classification label and the prediction classification result, and the parameters of the cleanliness evaluation model are optimized based on the second classification loss.
For example, the second classification loss may be derived from the difference between the prediction classification result and the classification label. In one embodiment, there are a plurality of first sample images and training is iterated using them. For example, suppose the number of first sample images is 100 and the classification labels of all 100 indicate non-scorable images (negative samples). If, after inputting these 100 first sample images into the cleanliness evaluation model, the prediction classification results indicate that 8 of them are scorable images, there are errors between the prediction classification results and the classification labels, and the second classification loss can be obtained from these errors.
Illustratively, the second classification loss may be calculated using a binary cross entropy loss function. The second classification loss is calculated in a similar manner to the first classification loss and will not be described in detail herein.
After the second classification loss is obtained, it may be taken as the total loss (second total loss). The parameters of the initial cleanliness evaluation model can then be optimized using back-propagation and gradient descent based on this second total loss. The optimization of the parameters may be performed iteratively until the cleanliness evaluation model converges.
For example, in the case where the loss calculation and optimization are performed based on a plurality of first sample images and positive and negative samples are simultaneously included therein, the first classification loss, the second classification loss, and the segmentation loss may also be integrated together to obtain a total loss (third total loss). The integration means may include summation, etc. Based on the third total loss, the parameters in the initial cleanliness assessment model can also be optimized using back-propagation and gradient descent algorithms.
When training is completed, the obtained cleanliness assessment model can be used for subsequent image cleanliness assessment, and the stage can be called an inference stage of the model.
According to the above scheme, during training, when the classification label of an image is "non-scorable image", the segmentation loss Loss_seg is not calculated, and in this case Loss_sum = Loss_cls. When the classification label of an image is "scorable image", the classification loss and the segmentation loss are calculated separately and then summed, i.e., Loss_sum = Loss_cls + Loss_seg.
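Putting the pieces together, the following sketch implements this conditional loss and the back-propagation/gradient-descent update, reusing the CleanlinessModel and dice_loss sketches above. The optimizer choice, the SCORABLE label index, and the train_samples iterable are assumptions; cross-entropy over the two classes is used here as the equivalent of the binary cross-entropy above:

```python
import torch
import torch.nn.functional as F

SCORABLE = 1  # assumed label index for "scorable" images

def sample_loss(logits, mask_pred, cls_label, seg_label):
    """Per-sample loss selection: Loss_sum = Loss_cls for non-scorable
    labels, Loss_cls + Loss_seg for scorable labels. Shapes: logits
    (1, 2), cls_label (1,), mask_pred and seg_label (1, 1, H, W)."""
    loss_cls = F.cross_entropy(logits, cls_label)
    if int(cls_label) == SCORABLE:
        return loss_cls + dice_loss(mask_pred, seg_label)
    return loss_cls

# Iterative optimization by back-propagation and gradient descent,
# repeated until the model converges.
model = CleanlinessModel()                          # sketch from FIG. 2
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
for image, cls_label, seg_label in train_samples:   # hypothetical iterable
    logits, mask_pred = model(image)
    loss = sample_loss(logits, mask_pred, cls_label, seg_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```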
Illustratively, in the subsequent inference stage, the classification result may be examined first, and the process returns directly if the result is a "non-scorable image". If the classification result is a "scorable image", the segmentation result is processed further: for example, the area proportion of the content is determined and mapped to the corresponding cleanliness score.
According to the technical scheme, the loss function can be adjusted through different classification labels, so that the effect of the cleanliness evaluation model is improved. Meanwhile, the accuracy of the classification result and the segmentation result of the image by the cleanliness evaluation model can be remarkably improved based on the first classification loss, the second classification loss and the segmentation loss, and therefore the accurate cleanliness evaluation result can be obtained by using the accurate segmentation result.
Illustratively, optimizing parameters of the cleanliness assessment model based on the first classification loss and the segmentation loss includes: carrying out weighted summation or weighted average on the first classification loss and the segmentation loss based on preset weights to obtain total loss; parameters of the cleanliness assessment model are optimized based on the total loss.
In one embodiment, different weights may be assigned according to the degree of influence of the first classification loss and the segmentation loss on the cleanliness evaluation model. For example, if the weight of the first classification loss Loss_cls1 is set to λ and the weight of the segmentation loss Loss_seg is set to γ, the loss function Loss_sum can be expressed as:

Loss_sum = λ · Loss_cls1 + γ · Loss_seg
according to the technical scheme, the accuracy of the obtained total loss can be improved by setting the preset weights for different losses, so that the accuracy of the cleanliness evaluation model can be improved by utilizing the losses.
Illustratively, the first classification loss and the second classification loss are cross-entropy losses, and the segmentation loss is a Dice loss. Their specific forms have been described in detail in the above embodiments and, for brevity, are not repeated here.
According to this technical scheme, representing the classification loss by cross entropy and the segmentation loss by Dice loss improves the accuracy of the classification loss and the segmentation loss, thereby further improving the accuracy of the cleanliness evaluation model.
Illustratively, determining the cleanliness score of the image to be measured based on the segmentation result includes: determining the area of the content in the image to be measured based on the segmentation result; calculating a first proportion between the area of the content in the image to be measured and the total area of the image to be measured; determining which of a plurality of preset proportion ranges the first proportion falls into, wherein the plurality of preset proportion ranges correspond one-to-one to a plurality of preset cleanliness scores; and determining, based on the specific preset proportion range into which the first proportion falls and the correspondence between the preset proportion ranges and the preset cleanliness scores, the preset cleanliness score corresponding to that range as the cleanliness score of the image to be measured.
In the above embodiment in which the segmentation result is represented by a mask image, the area of the white region may be counted to obtain the area of the mask, i.e., the content area. A first proportion of the content area to the total area of the image to be measured is then calculated and used as the basis of the cleanliness score.
In one embodiment, enteroscopy images are scored using the Boston scoring criteria. There are a plurality of preset proportion ranges, corresponding respectively to cleanliness scores of 0-3. For example, denote the first ratio by r: r ≤ r1 corresponds to 3 points, r1 < r ≤ r2 corresponds to 2 points, r2 < r ≤ r3 corresponds to 1 point, and r > r3 corresponds to 0 points. The cleanliness score of the image to be measured is thus determined from the correspondence between the content-area proportion given by the segmentation result and the score. Specifically, for example, r1 = 0.1, r2 = 0.4 and r3 = 0.9: when the first ratio r is not more than 0.1, the cleanliness score is 3 points; when r is more than 0.1 and not more than 0.4, the score is 2 points; when r is more than 0.4 and not more than 0.9, the score is 1 point; and when r is more than 0.9, the score is 0 points.
According to the technical scheme, the evaluation result of the cleanliness is determined based on the proportion of the area occupied by the content in the image to be measured, so that errors of manual evaluation can be avoided. Therefore, the technical scheme can remarkably improve the accuracy of evaluating the cleanliness of the image acquired by the endoscope.
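A minimal sketch of this area-ratio scoring, assuming a binary mask image in which non-zero (white) pixels mark the content, and using the example thresholds r1 = 0.1, r2 = 0.4, r3 = 0.9 from the embodiment above; the function names are illustrative only.

```python
import numpy as np

R1, R2, R3 = 0.1, 0.4, 0.9  # example thresholds from the embodiment above

def map_ratio_to_score(r: float) -> int:
    """Map the first ratio r to a Boston-style cleanliness score (0-3)."""
    if r <= R1:
        return 3
    if r <= R2:
        return 2
    if r <= R3:
        return 1
    return 0

def score_from_mask(mask: np.ndarray) -> int:
    content_area = int((mask > 0).sum())  # count white pixels of the mask
    r = content_area / mask.size          # first ratio: content area / total image area
    return map_ratio_to_score(r)
```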
Illustratively, the correspondence between the preset proportion range and the preset cleanliness score is obtained in the following manner: acquiring second segmentation labels and preset cleanliness scores corresponding to a plurality of second sample images, wherein each second segmentation label indicates the position of the content in the corresponding second sample image, and the preset cleanliness scores corresponding to the plurality of second sample images cover the plurality of preset cleanliness scores; for each second sample image in the plurality of second sample images, determining the area of the content in the second sample image based on the corresponding second segmentation label; calculating a second ratio between the area of the content in the second sample image and the total area of the second sample image; and determining the correspondence between the preset proportion ranges and the preset cleanliness scores based on the second ratios corresponding to the second sample images.
The second sample images may be acquired separately, or the first sample images may be used directly as the second sample images. Similar to the first segmentation label, the second segmentation label may be obtained by automatic device identification or by manual annotation.
The preset cleanliness score may be based on the Boston score or the Ottawa score, or may be determined as desired. In one embodiment, a plurality of second sample images are first acquired, each labeled with a corresponding second segmentation label and a preset cleanliness score, wherein each of the plurality of preset cleanliness scores corresponds to at least one second sample image.
For each acquired second sample image, the area of the content may be calculated based on the corresponding second segmentation label, and the second ratio of that area to the total area of the image may then be determined; the correspondence between the preset proportion ranges and the preset cleanliness scores is derived from these values. For example, enteroscopy images are scored using the Boston scoring criteria, which has four grades of 0, 1, 2 and 3 points, where a higher score corresponds to a smaller second ratio. Ten second sample images may be acquired, with preset cleanliness scores covering all four grades. The content area is determined from the second segmentation label of each second sample image, and the second ratio is calculated from it. Assume the second ratios are, in order, 1%, 10%, 27%, 35%, 40%, 60%, 70%, 80%, 90% and 95%. Finally, the correspondence between the preset proportion ranges and the preset cleanliness scores is obtained from the second ratios and the preset cleanliness scores: a score of 0 corresponds to a content-area proportion of 90%-100%, a score of 1 to 40%-90%, a score of 2 to 10%-40%, and a score of 3 to 0%-10%.
According to the technical scheme, the accurate corresponding relation between the preset proportion range and the preset cleanliness score can be obtained, so that the accuracy of evaluating the cleanliness of the image to be tested is improved based on the corresponding relation.
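One possible way to derive this correspondence programmatically is sketched below. Grouping the second ratios by preset cleanliness score and taking the largest ratio in each group as the upper bound of that score's range is an assumption about how boundaries are chosen; the text does not fix this choice.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def calibrate_ranges(second_ratios: List[float],
                     preset_scores: List[int]) -> List[Tuple[float, int]]:
    """Derive (upper_bound, score) pairs from labeled second sample images."""
    by_score: Dict[int, List[float]] = defaultdict(list)
    for r, s in zip(second_ratios, preset_scores):
        by_score[s].append(r)
    # A higher cleanliness score corresponds to a smaller content-area ratio,
    # so take the largest ratio observed for each score as the upper bound of its range.
    return sorted((max(rs), s) for s, rs in by_score.items())

# Usage: bounds = calibrate_ranges(second_ratios, preset_scores) yields sorted
# (upper_bound, score) pairs that define the preset proportion ranges.
```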
Illustratively, there may be a plurality of images to be measured, belonging to a plurality of different examination regions, and the method further includes: carrying out weighted summation or weighted averaging on the cleanliness scores respectively corresponding to the plurality of different examination regions to obtain a total cleanliness score.
Taking enteroscopy as an example, the examination area can be divided into three segments: rectum-sigmoid colon, transverse colon-descending colon, and ascending colon-cecum. Corresponding cleanliness scores are obtained from the images to be measured of the three segments, and these three scores are then weighted-summed or weighted-averaged to obtain the total intestinal cleanliness score.
For example, different weights may be set for different examination regions. In one embodiment, following the above example in which the enteroscopy is divided into three segments, the following criteria may be set: if the total cleanliness score is greater than 2 points, the cleanliness detection result is clean; if the total cleanliness score is less than or equal to 2 points, the result is unclean. The weight of the rectum-sigmoid colon segment is set to 30%, that of the transverse colon-descending colon to 50%, and that of the ascending colon-cecum to 20%. If the cleanliness score of the rectum-sigmoid colon segment is 2 points, that of the transverse colon-descending colon is 3 points, and that of the ascending colon-cecum is 1 point, the total cleanliness score = 2×30% + 3×50% + 1×20% = 2.3 points, and the whole intestinal tract can be judged to be clean.
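The weighted total from this example can be reproduced with a few lines; the segment names are shorthand, not terminology from the patent.

```python
# Weights and per-segment scores from the enteroscopy example above.
weights = {"rectum-sigmoid": 0.30, "transverse-descending": 0.50, "ascending-cecum": 0.20}
scores = {"rectum-sigmoid": 2, "transverse-descending": 3, "ascending-cecum": 1}

total = sum(weights[k] * scores[k] for k in weights)  # 2*0.3 + 3*0.5 + 1*0.2 = 2.3
result = "clean" if total > 2 else "unclean"          # 2.3 > 2, so the result is "clean"
```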
According to the technical scheme, the accuracy of the overall cleanliness evaluation of the object to be inspected is improved by weighting and summing or weighting and averaging the cleanliness scores of different inspection areas.
Illustratively, in the case where the image to be measured is an image acquired by the endoscope in real time for the target examination region, if the classification result indicates that the image to be measured is a scorable image, the cleanliness detection operation 100 may further include the following steps after the cleanliness score of the image to be measured is determined based on the segmentation result in step S130: comparing the cleanliness score with a cleanliness threshold; and, if the cleanliness score is below the cleanliness threshold, cleaning the target examination region.
The cleanliness threshold may be determined according to the accuracy required for the endoscopy: the higher the cleanliness threshold, the higher the inspection accuracy; the lower the threshold, the lower the accuracy. In the embodiment described above, in which enteroscopy images are scored using the Boston scoring criteria, the cleanliness threshold may be set to 2 points. When the cleanliness score is less than 2 points, the target examination region is cleaned, for example by spraying water directly through the endoscope system or by extending another medical instrument through the instrument channel of the endoscope.
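A minimal sketch of this check follows; the cleaning call is a hypothetical device interface, since the patent does not specify a programmatic API.

```python
CLEANLINESS_THRESHOLD = 2  # example threshold from the Boston-based embodiment above

def check_and_clean(cleanliness_score: int, endoscope) -> None:
    """Compare the score with the threshold and trigger cleaning when it is too low."""
    if cleanliness_score < CLEANLINESS_THRESHOLD:
        endoscope.spray_water()  # hypothetical cleaning call; the real interface is device-specific
```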
According to the technical scheme, the target examination area can be cleaned in time when the cleanliness of the target examination area is too low by setting the cleanliness threshold, so that the cleanliness of the target examination area can meet the requirements, and the quality of endoscopy is further ensured.
Illustratively, the cleanliness detection method may further include the following step: if the classification result indicates that the image to be detected is an unscorable image, a corresponding no-score feedback operation is performed.
According to the above embodiment, when the image to be measured is an unscorable image, it is indicated that there is an abnormality in the current image to be measured, and at this time, a corresponding unscorable feedback operation may be performed. Illustratively, the no-score feedback operation may include an examination of the endoscope based on the cause of the abnormality, such as prompting the user to wipe the lens, prompting the user to adjust the detection position, and the like.
According to the technical scheme, when the image to be detected is an unscorable image, a corresponding no-score feedback operation can be taken in time, so that a timely response is made when excessive interference prevents accurate scoring.
Illustratively, the no-score feedback operation includes performing a no-score operation, which may include one or more of: deleting the image to be detected; deleting the segmentation result; and outputting prompt information.
For example, after determining that the image to be measured is an unscorable image, the current image may be deleted, or the current segmentation result may be deleted, so as to reduce the occupation of computer resources. The prompt information may remind the user to adjust the shooting angle of the endoscope. The prompt information can be one or more of text, image or video information, and can also be audio information and/or light information; the audio information may be, for example, a beep, and the light information may be, for example, a high-frequency flashing light signal. Of course, these forms of prompt information are merely examples, and other suitable forms may be used. Alternatively, the endoscope system itself may be inspected to rule out system failures. It will be appreciated that the above operations are not limited to any particular order, and one or more of them may be performed, including simultaneously. In this way, when the image to be measured is an unscorable image, the no-score operation can be performed in time.
According to the technical scheme, abnormal cleanliness detection conditions can be handled in time by performing the no-score operation.
For example, when the number of images to be detected has not reached a preset number, the no-score feedback operation corresponding to any current image to be detected includes: returning to the step of acquiring the image to be detected acquired by the endoscope, and performing the cleanliness detection operation on the next image to be detected after the current image to be detected.
When the number of images to be detected reaches the preset number, the no-score feedback operation corresponding to the current image to be detected includes: if the preset number of images to be detected are all unscorable images, performing a no-score operation, which may include one or more of: deleting the preset number of images to be detected; deleting the segmentation results corresponding to the preset number of images to be detected; and outputting prompt information.
In one embodiment, the number of consecutive unscorable images that must accumulate before the no-score operation is performed may be set as desired. For example, assume the preset number is three. If the first image to be detected is an unscorable image, the next image to be detected, that is, the second image, may be acquired and the cleanliness detection operation 100 performed on it. If the second image is still unscorable, the third image may be acquired and the cleanliness detection operation 100 performed on it. If the third image is still unscorable, the no-score operation is performed. In other words, if at least one of the three images to be detected is a scorable image, the no-score operation is not performed. The no-score operation is performed in the same manner as in the above embodiment, and for brevity the description is not repeated here.
In the above technical solution, the cumulative number of unscorable images is taken into account. If only a few unscorable images occur at first, the no-score operation need not be performed and subsequent images continue to be detected; only if the images to be detected remain unscorable after several accumulations is the no-score operation performed, for example outputting prompt information or checking the endoscope system. This avoids frequently triggering the no-score operation because of a single detection error, and reduces spurious operations and false prompts from the system.
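A sketch of this accumulation logic follows, assuming the count of consecutive unscorable images resets whenever a scorable image appears; perform_no_score_operation is a placeholder for the deletion/prompt steps described above.

```python
PRESET_NUMBER = 3       # example preset number from the text
_unscorable_streak = 0  # consecutive unscorable images seen so far

def on_detection_result(scorable: bool) -> None:
    """Accumulate consecutive unscorable images before triggering the no-score operation."""
    global _unscorable_streak
    if scorable:
        _unscorable_streak = 0  # a scorable image resets the count
        return
    _unscorable_streak += 1
    if _unscorable_streak >= PRESET_NUMBER:
        perform_no_score_operation()  # hypothetical: delete images/results, output a prompt
        _unscorable_streak = 0
```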
According to still another aspect of the present invention, there is also provided a cleanliness detection device for an endoscope, the device including a cleanliness detection module. FIG. 3 shows a schematic block diagram of a cleanliness detection module 300 according to one embodiment of the present invention. As shown in fig. 3, the cleanliness detection module 300 may include: an acquisition sub-module 310, an input sub-module 320, and a determination sub-module 330.
An acquisition sub-module 310, configured to acquire an image to be detected acquired by the endoscope.
The input sub-module 320 is configured to input the image to be tested into the cleanliness evaluation model, obtain a classification result and a segmentation result corresponding to the image to be tested, where the classification result is used to indicate whether the image to be tested is a scorable image or a non-scorable image, and the segmentation result is used to indicate a position where the content in the image to be tested is located.
The determining sub-module 330 is configured to determine a cleanliness score of the image to be measured based on the segmentation result if the classification result indicates that the image to be measured is a scorable image.
Illustratively, the cleanliness assessment model includes an encoder module, a decoder module, and a classification head. The input sub-module 320 may include a first input unit, a second input unit, and a third input unit.
The first input unit is used for inputting the image to be detected into the encoder module to obtain at least one group of encoding characteristics.
And the second input unit is used for inputting at least one group of coding features into the decoder module to obtain a segmentation result.
And the third input unit is used for inputting at least part of the coding features in the at least one group of coding features into the classification head to obtain a classification result.
Illustratively, the encoder module and the classification head form a residual network, the encoder module comprising a plurality of convolution modules in the residual network and the classification head comprising a fully-connected layer in the residual network, while the decoder module is implemented with a decoder module of a U-shaped network.
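As an informal PyTorch sketch of such an architecture: the specific ResNet variant, channel widths, and the absence of encoder-decoder skip connections are simplifying assumptions; a full U-shaped network would pass multiple groups of encoding features to the decoder.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CleanlinessModel(nn.Module):
    """Sketch: ResNet-style encoder with a classification head, plus a
    simplified U-Net-style decoder producing a one-channel content mask."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        resnet = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(resnet.children())[:-2])  # convolution modules
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(512, num_classes))   # fully-connected layer
        self.decoder = nn.Sequential(                                 # upsampling path (no skip connections here)
            nn.Upsample(scale_factor=2), nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=8), nn.Conv2d(64, 1, 1),         # one-channel mask logits
        )

    def forward(self, x: torch.Tensor):
        feats = self.encoder(x)                    # encoding features (1/32 resolution)
        return self.cls_head(feats), self.decoder(feats)
```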
Illustratively, the cleanliness detection module 300 further includes a training sub-module for training a cleanliness assessment model. The training sub-module may include an acquisition unit, a fourth input unit, a first optimization unit, and a second optimization unit.
An acquisition unit, configured to acquire a first sample image and corresponding annotation information, wherein the annotation information includes a classification label and, in the case that the classification label indicates that the first sample image is a scorable image, further includes a first segmentation label; the classification label is used to indicate whether the first sample image is a scorable image or an unscorable image, and the first segmentation label is used to indicate the position of the content in the first sample image.
The fourth input unit is used for inputting the first sample image into the cleanliness evaluation model to obtain a prediction classification result and a prediction segmentation result corresponding to the first sample image, wherein the prediction classification result indicates whether the first sample image is a scorable image or an unscorable image, and the prediction segmentation result indicates the predicted position of the content in the first sample image.
And a first optimizing unit for calculating a first classification loss based on the classification label and the prediction classification result, calculating a segmentation loss based on the first segmentation label and the prediction segmentation result, and optimizing parameters of the cleanliness evaluation model based on the first classification loss and the segmentation loss, in the case that the classification label indicates that the first sample image is a scorable image.
And a second optimizing unit for calculating a second classification loss based on the classification label and the prediction classification result and optimizing parameters of the cleanliness evaluation model based on the second classification loss, in the case where the classification label indicates that the first sample image is an unscorable image.
The first optimizing unit may include a total loss obtaining subunit and an optimizing subunit, for example. A total loss obtaining subunit, configured to perform weighted summation or weighted average on the first classification loss and the segmentation loss based on a preset weight, to obtain a total loss; and the optimizing subunit is used for optimizing the parameters of the cleanliness evaluation model based on the total loss.
Illustratively, the determination submodule 330 may include a first determination unit, a calculation unit, a judgment unit, and a second determination unit.
And a first determining unit for determining the area of the content in the image to be measured based on the segmentation result.
And the calculating unit is used for calculating a first ratio between the area of the content in the image to be detected and the total area of the image to be detected.
The judging unit is used for judging what preset proportion range of the multiple preset proportion ranges the first proportion falls into, wherein the multiple preset proportion ranges correspond to the multiple preset cleanliness scores one by one.
The second determining unit is used for determining that the preset cleanliness score corresponding to the specific preset proportion range is the cleanliness score of the image to be detected based on the specific preset proportion range in which the first proportion falls and based on the corresponding relation between the preset proportion range and the preset cleanliness score.
Illustratively, the correspondence between the preset proportion range and the preset cleanliness score is obtained in the following manner: acquiring second segmentation labels and preset cleanliness scores corresponding to a plurality of second sample images, wherein each second segmentation label indicates the position of the content in the corresponding second sample image, and the preset cleanliness scores corresponding to the plurality of second sample images cover the plurality of preset cleanliness scores; for each second sample image in the plurality of second sample images, determining the area of the content in the second sample image based on the corresponding second segmentation label; calculating a second ratio between the area of the content in the second sample image and the total area of the second sample image; and determining the correspondence between the preset proportion ranges and the preset cleanliness scores based on the second ratios corresponding to the second sample images.
For example, in the case that the image to be measured is an image acquired by the endoscope in real time for a target examination region, the cleanliness detection module 300 may further include a comparison sub-module and a cleaning sub-module.
A comparing sub-module, configured to compare the cleanliness score with a cleanliness threshold after determining that the sub-module 330 determines the cleanliness score of the image to be tested based on the segmentation result if the classification result indicates that the image to be tested is a scorable image.
And the cleaning sub-module is used for cleaning the target inspection area if the cleanliness score is lower than the cleanliness threshold value.
Illustratively, the cleanliness detection module 300 may also include an execution sub-module. The execution submodule is used for executing corresponding no-score feedback operation if the classification result indicates that the image to be detected is an unscorable image.
Illustratively, the no-score feedback operation includes performing a no-score operation, the no-score operation including one or more of: deleting the image to be detected; deleting the segmentation result; and outputting prompt information.
For example, when the number of images to be detected has not reached a preset number, the no-score feedback operation corresponding to any current image to be detected includes: returning to the step of acquiring the image to be detected acquired by the endoscope, and performing the cleanliness detection operation on the next image to be detected after the current image to be detected; when the number of images to be detected reaches the preset number, the no-score feedback operation corresponding to the current image to be detected includes: if the preset number of images to be detected are all unscorable images, performing a no-score operation, which includes one or more of the following: deleting the preset number of images to be detected; deleting the segmentation results corresponding to the preset number of images to be detected; and outputting prompt information.
According to still another aspect of the present invention, there is also provided an electronic apparatus. Fig. 4 shows a schematic block diagram of an electronic device 400 according to an embodiment of the invention. As shown in fig. 4, the electronic device 400 includes a processor 410 and a memory 420. Wherein the memory 420 has stored therein computer program instructions for executing the above-described cleanliness detection method when executed by the processor 410.
According to still another aspect of the present invention, there is also provided a storage medium. Program instructions are stored on the storage medium for performing the cleanliness detection method described above when executed. The storage medium may include, for example, a storage component of a tablet computer, a hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the foregoing storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
Those skilled in the art will understand the specific implementation schemes of the cleanliness detection device, the electronic device and the storage medium by reading the above description about the cleanliness detection method, and for brevity, the detailed description is omitted here.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present invention thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in order to streamline the invention and aid in understanding one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the invention. However, the method of the present invention should not be construed as reflecting the following intent: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules in a cleanliness detection device according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
The foregoing description is merely illustrative of specific embodiments of the present invention and the scope of the present invention is not limited thereto, and any person skilled in the art can easily think about variations or substitutions within the scope of the present invention. The protection scope of the invention is subject to the protection scope of the claims.

Claims (13)

1. A cleanliness detection method for an endoscope, the method comprising a cleanliness detection operation comprising:
acquiring an image to be detected acquired by the endoscope;
inputting the image to be measured into a cleanliness evaluation model to obtain a classification result and a segmentation result corresponding to the image to be measured, wherein the classification result is used for indicating whether the image to be measured is a scorable image or an unscorable image, and the segmentation result is used for indicating the position of the content in the image to be measured;
determining a cleanliness score of the image to be tested based on the segmentation result if the classification result indicates that the image to be tested is a scorable image;
the cleanliness evaluation model is obtained by training in the following way:
acquiring a first sample image and corresponding annotation information, wherein the annotation information comprises a classification label, and in the case that the classification label indicates that the first sample image is a scorable image, the annotation information further comprises a first segmentation label, the classification label being used for indicating whether the first sample image is a scorable image or a non-scorable image, and the first segmentation label being used for indicating the position of the content in the first sample image;
inputting the first sample image into the cleanliness evaluation model to obtain a prediction classification result and a prediction segmentation result corresponding to the first sample image, wherein the prediction classification result is used for indicating whether the first sample image is a scorable image or a non-scorable image, and the prediction segmentation result is used for indicating the predicted position of the content in the first sample image;
calculating a first classification loss based on the classification label and the prediction classification result, calculating a segmentation loss based on the first segmentation label and the prediction segmentation result, and optimizing parameters of the cleanliness assessment model based on the first classification loss and the segmentation loss, in a case where the classification label indicates that the first sample image is a scorable image;
in the case where the classification label indicates that the first sample image is a non-scorable image, a second classification loss is calculated based on the classification label and the predictive classification result, and parameters of the cleanliness assessment model are optimized based on the second classification loss.
2. The method according to claim 1, wherein the cleanliness evaluation model includes an encoder module, a decoder module and a classification head, and the inputting of the image to be measured into the cleanliness evaluation model to obtain the classification result and the segmentation result corresponding to the image to be measured includes:
inputting the image to be measured into the encoder module to obtain at least one group of encoding features;
inputting the at least one set of encoding features into the decoder module to obtain the segmentation result;
inputting at least part of the at least one group of coding features into the classification head to obtain the classification result.
3. The method of claim 2, wherein the encoder module and the classification head form a residual network, the encoder module comprising a plurality of convolution modules in the residual network, the classification head comprising a fully-connected layer in the residual network, and the decoder module being implemented with a decoder module in a U-shaped network.
4. The method of claim 1, wherein optimizing parameters of the cleanliness assessment model based on the first classification loss and the segmentation loss comprises:
carrying out weighted summation or weighted average on the first classification loss and the segmentation loss based on a preset weight to obtain a total loss;
and optimizing parameters of the cleanliness evaluation model based on the total loss.
5. A method according to any one of claims 1-3, wherein said determining a cleanliness score of the image to be measured based on the segmentation result comprises:
Determining the area of the content in the image to be detected based on the segmentation result;
calculating a first ratio between the area of the content in the image to be measured and the total area of the image to be measured;
determining which of a plurality of preset proportion ranges the first ratio falls into, wherein the plurality of preset proportion ranges correspond one-to-one to the plurality of preset cleanliness scores;
and determining that the preset cleanliness score corresponding to the specific preset proportion range is the cleanliness score of the image to be detected based on the specific preset proportion range in which the first proportion falls and based on the corresponding relation between the preset proportion range and the preset cleanliness score.
6. The method of claim 5, wherein the correspondence between the preset scale range and the preset cleanliness score is obtained by:
acquiring second segmentation labels and preset cleanliness scores corresponding to a plurality of second sample images, wherein the second segmentation labels are used for indicating positions of the content in the corresponding second sample images, and the preset cleanliness scores corresponding to the plurality of second sample images comprise the plurality of preset cleanliness scores;
For each of the plurality of second sample images,
determining the area of the content in the second sample image based on a second segmentation label corresponding to the second sample image;
calculating a second ratio between the area of the content in the second sample image and the total area of the second sample image;
and determining a corresponding relation between the preset proportion range and a preset cleanliness score based on the second proportions corresponding to the second sample images.
7. A method according to any one of claims 1-3, wherein the image to be measured is an image acquired by the endoscope in real time for a target examination region; then, after determining a cleanliness score of the image to be measured based on the segmentation result if the classification result indicates that the image to be measured is a scorable image, the method further includes:
comparing the cleanliness score to a cleanliness threshold;
and if the cleanliness score is lower than the cleanliness threshold, cleaning the target examination region.
8. A method according to any one of claims 1-3, wherein the method further comprises:
And if the classification result indicates that the image to be detected is an unscorable image, executing corresponding unscoring feedback operation.
9. The method of claim 8, wherein the no-score feedback operation comprises: performing a no-score operation, the no-score operation comprising one or more of: deleting the image to be detected; deleting the segmentation result; and outputting prompt information.
10. The method of claim 8, wherein:
when the number of the images to be detected does not reach the preset number, the no-score feedback operation corresponding to any current image to be detected comprises the following steps: returning to the step of acquiring the image to be detected acquired by the endoscope, and executing the cleanliness detection operation on the next image to be detected after the current image to be detected;
when the number of the images to be detected reaches the preset number, the no-score feedback operation corresponding to the current image to be detected comprises the following steps: if the preset number of images to be detected are all unscorable images, performing a no-score operation, wherein the no-score operation comprises one or more of the following: deleting the preset number of images to be detected; deleting the segmentation results corresponding to the preset number of images to be detected; and outputting prompt information.
11. A cleanliness detection device for an endoscope, the device comprising a cleanliness detection module comprising:
the acquisition sub-module is used for acquiring an image to be detected acquired by the endoscope;
the input sub-module is used for inputting the image to be detected into a cleanliness evaluation model to obtain a classification result and a segmentation result corresponding to the image to be detected, wherein the classification result is used for indicating whether the image to be detected is a scorable image or an unscorable image, and the segmentation result is used for indicating the position of the content in the image to be detected;
a determining submodule, configured to determine a cleanliness score of the image to be measured based on the segmentation result if the classification result indicates that the image to be measured is a scorable image;
the cleanliness detection module further comprises a training sub-module for training the cleanliness evaluation model;
the training submodule includes:
an acquisition unit, configured to acquire a first sample image and corresponding annotation information, wherein the annotation information comprises a classification label and, in the case that the classification label indicates that the first sample image is a scorable image, further comprises a first segmentation label, the classification label being used for indicating whether the first sample image is a scorable image or a non-scorable image, and the first segmentation label being used for indicating the position of the content in the first sample image;
a fourth input unit, configured to input the first sample image into the cleanliness evaluation model to obtain a prediction classification result and a prediction segmentation result corresponding to the first sample image, wherein the prediction classification result is used to indicate whether the first sample image is a scorable image or a non-scorable image, and the prediction segmentation result is used to indicate the predicted position of the content in the first sample image;
a first optimizing unit configured to calculate a first classification loss based on the classification label and the prediction classification result, calculate a segmentation loss based on the first segmentation label and the prediction segmentation result, and optimize parameters of the cleanliness evaluation model based on the first classification loss and the segmentation loss, in a case where the classification label indicates that the first sample image is a scorable image;
and a second optimizing unit configured to calculate a second classification loss based on the classification label and the prediction classification result, and optimize parameters of the cleanliness evaluation model based on the second classification loss, in a case where the classification label indicates that the first sample image is a non-scorable image.
12. An electronic device comprising a processor and a memory, the memory having stored therein a computer program, the processor executing the computer program to implement the cleanliness detection method for an endoscope as claimed in any of claims 1-10.
13. A storage medium storing a computer program/instruction which, when executed by a processor, implements the cleanliness detection method for an endoscope according to any one of claims 1 to 10.