CN115131592A - Fundus image classification reading system and fundus image classification reading method - Google Patents
- Publication number
- CN115131592A (application no. CN202110316350.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- result
- classification
- quality control
- classification result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Abstract
The present disclosure describes a reading system for fundus image classification. The reading system comprises: an acquisition module for acquiring fundus images; a preprocessing module for preprocessing the fundus images; a first classification module for classifying the preprocessed fundus images with a deep-learning-based first classification model to obtain a first classification result and a classification result type; a grouping module that divides the preprocessed fundus images into negative result images, positive result images, and images to be reclassified; a first quality control module that obtains a final classification result and images to be arbitrated using quality control models configured with a preset negative prediction rate and a preset positive prediction rate; a second classification module that classifies the images to be reclassified with a second classification model to obtain a final classification result and images to be arbitrated; and an arbitration module that arbitrates the images to be arbitrated to obtain an arbitration classification result. According to the present disclosure, a fundus image classification reading system and a fundus image reading method capable of improving classification accuracy are provided.
Description
Technical Field
The present disclosure generally relates to a reading system and a reading method for fundus image classification.
Background
Medical images often contain many details of body structures or tissues. In modern hospitals, much diagnostic information is derived from medical images such as fundus images. In the clinic, understanding the details in medical images can help doctors identify relevant diseases, and medical imaging has developed into a primary method of clinically identifying disease. However, conventional identification of disease information based on medical images relies primarily on the experience-based judgment of professional physicians. Under such circumstances, developing automatic image reading techniques capable of assisting doctors in identifying relevant diseases has become a popular direction in the field of medical imaging. With the development of artificial intelligence, image reading techniques based on computer vision and machine learning have been developed and applied in medical image recognition.
For example, Patent Document 1 (CN105513077A) discloses a system for diabetic retinopathy screening. The system includes a fundus image acquisition device, an image processing and screening device, and a report output device. The fundus image acquisition device acquires or receives fundus images of an examinee; the image processing and screening device processes the fundus images, detects whether lesions exist in them, and transmits the detection result to the report output device; and the report output device outputs a corresponding detection report based on the detection result.
However, in actual clinical applications, due to the diversity of fundus images, the screening system described in Patent Document 1 may output erroneous or inaccurate detection reports for certain fundus images, reducing the classification accuracy of the screening system.
Disclosure of Invention
The present disclosure has been made in view of the above circumstances, and an object thereof is to provide a reading system and a reading method for fundus image classification that can improve classification accuracy.
To this end, a first aspect of the present disclosure provides a reading system for fundus image classification, including: an acquisition module for acquiring a fundus image; a preprocessing module for preprocessing the fundus image to obtain a preprocessed fundus image; a first classification module that receives the preprocessed fundus image, classifies it using a first classification model based on deep learning to obtain a first classification result, and obtains, based on the first classification result, a classification result type indicating whether reclassification is required; a grouping module that divides the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified based on the first classification result and the classification result type; a first quality control module including a negative quality control module and a positive quality control module, wherein the negative quality control module receives the negative result image and obtains a negative quality control result of the negative result image using a first quality control model in which a first confidence threshold is configured based on a preset negative prediction rate, and if the negative quality control result is consistent with the first classification result, the negative quality control result is taken as a final classification result, otherwise the negative result image is taken as a first image to be arbitrated; the positive quality control module receives the positive result image and obtains a positive quality control result of the positive result image using a second quality control model in which a second confidence threshold is configured based on a preset positive prediction rate, and if the positive quality control result is consistent with the first classification result, the positive quality control result is taken as the final classification result, otherwise the positive result image is taken as a second image to be arbitrated; a second classification module that receives the image to be reclassified and classifies it using a second classification model that is based on deep learning and trained on images to be reclassified, so as to obtain a second classification result, and if the second classification result is consistent with the first classification result, takes the second classification result as the final classification result, otherwise takes the image to be reclassified as a third image to be arbitrated; and an arbitration module that receives the first image to be arbitrated, the second image to be arbitrated, or the third image to be arbitrated as an image to be arbitrated, and arbitrates the image to be arbitrated to obtain an arbitration classification result as the final classification result.
In this case, the fundus image is divided into a negative result image, a positive result image, and an image to be reclassified. For the negative result image and the positive result image, lower-risk negative result images are screened based on the negative prediction rate and higher-risk positive result images are screened based on the positive prediction rate, and a consistency judgment is performed; the image to be reclassified is further classified using the second classification model; and finally the images to be arbitrated are arbitrated. As shown in the sketch below, this triage can improve the classification accuracy of the reading system.
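The overall flow can be summarized as follows. This is a minimal illustrative sketch, not the disclosure's implementation; all function and module names are hypothetical stand-ins, and the "R0"/"R2"/"R3" labels anticipate the grading introduced later in the description:

```python
from dataclasses import dataclass

@dataclass
class FirstResult:
    label: str            # e.g. "R0" (no retinopathy), "R1", "R2", "R3"
    needs_reclassify: bool

def read_fundus_image(image, first_model, neg_qc, pos_qc, second_model, arbitrate):
    """Triage flow: first classification, grouping, quality control, arbitration."""
    first = first_model(image)
    if first.needs_reclassify:              # image to be reclassified
        second = second_model(image)        # second classification module
        return second if second == first.label else arbitrate(image)
    if first.label == "R0":                 # negative result image
        qc = neg_qc(image)                  # negative quality control module
    else:                                   # positive result image (R2 or R3)
        qc = pos_qc(image)                  # positive quality control module
    return qc if qc == first.label else arbitrate(image)
```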
Further, in the reading system according to the first aspect of the present disclosure, optionally, the first classification model includes a plurality of sub-classification models, one for each type of diabetic retinopathy, each of which receives the preprocessed fundus image and obtains a sub-classification result, and the first classification module obtains the first classification result based on the plurality of sub-classification results. Thereby, the first classification result can be obtained from the plurality of sub-classification models.
In addition, in the reading system according to the first aspect of the present disclosure, optionally, the preset negative prediction rate is 95% to 99%, and the preset positive prediction rate is 95% to 99%. Thus, suitable preset negative and positive prediction rates can be set.
Further, in the reading system according to the first aspect of the present disclosure, optionally, the first classification module outputs the first classification result according to the retinopathy grading system used in the British national retinopathy screening program. In this case, being based on a retinopathy grading system that is already in mature use can further improve the classification accuracy of the reading system.
Further, in the reading system according to the first aspect of the present disclosure, optionally, the first classification result includes no retinopathy, a background stage, a pre-proliferative stage, and a proliferative stage; the negative result image includes the preprocessed fundus image whose first classification result is no retinopathy and whose classification result type is that reclassification is not required; the positive result image includes the preprocessed fundus image whose first classification result is the pre-proliferative stage or the proliferative stage and whose classification result type is that reclassification is not required; and the image to be reclassified includes the preprocessed fundus image whose classification result type is that reclassification is required. In this case, dividing the preprocessed fundus images into negative result images, positive result images, and images to be reclassified facilitates subsequent targeted processing of each image, which can further improve the classification accuracy of the reading system.
In addition, the reading system according to the first aspect of the present disclosure optionally further includes a self-check module configured to spot-check fundus images with negative quality control results to determine whether the first confidence threshold meets the requirement, and to spot-check fundus images with positive quality control results to determine whether the second confidence threshold meets the requirement. In this case, whether the first confidence threshold and the second confidence threshold meet the requirements can be further confirmed, which can improve the classification accuracy of the reading system.
In addition, in the reading system according to the first aspect of the present disclosure, optionally, the first confidence threshold is configured using gold standard data based on the preset negative prediction rate, and the second confidence threshold is configured using gold standard data based on the preset positive prediction rate. Thereby, the confidence thresholds can be determined.
In addition, the reading system according to the first aspect of the present disclosure optionally further includes an output module configured to output a result report. This enables a result report to be output.
A second aspect of the present disclosure provides a reading method for fundus image classification, including: acquiring a fundus image; preprocessing the fundus image to obtain a preprocessed fundus image; classifying the preprocessed fundus image using a first classification model based on deep learning to obtain a first classification result, and obtaining, based on the first classification result, a classification result type indicating whether reclassification is required; dividing the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified based on the first classification result and the classification result type; obtaining a negative quality control result of the negative result image using a first quality control model in which a first confidence threshold is configured based on a preset negative prediction rate, and if the negative quality control result is consistent with the first classification result, taking the negative quality control result as a final classification result, otherwise taking the negative result image as a first image to be arbitrated; obtaining a positive quality control result of the positive result image using a second quality control model in which a second confidence threshold is configured based on a preset positive prediction rate, and if the positive quality control result is consistent with the first classification result, taking the positive quality control result as the final classification result, otherwise taking the positive result image as a second image to be arbitrated; classifying the image to be reclassified using a second classification model that is based on deep learning and trained on images to be reclassified to obtain a second classification result, and if the second classification result is consistent with the first classification result, taking the second classification result as the final classification result, otherwise taking the image to be reclassified as a third image to be arbitrated; and taking the first image to be arbitrated, the second image to be arbitrated, or the third image to be arbitrated as an image to be arbitrated, and arbitrating the image to be arbitrated to obtain an arbitration classification result as the final classification result. In this case, the fundus image is divided into a negative result image, a positive result image, and an image to be reclassified; lower-risk negative result images and higher-risk positive result images are screened based on the negative prediction rate and the positive prediction rate respectively and subjected to a consistency judgment; the image to be reclassified is further classified using the second classification model; and finally the images to be arbitrated are arbitrated. Therefore, the classification accuracy can be improved.
In addition, in the reading method according to the second aspect of the present disclosure, optionally, fundus images with negative quality control results are spot-checked to determine whether the first confidence threshold meets the requirement, and fundus images with positive quality control results are spot-checked to determine whether the second confidence threshold meets the requirement. In this case, whether the first confidence threshold and the second confidence threshold meet the requirements can be further confirmed, which can improve the classification accuracy.
According to the present disclosure, a reading system and a reading method for fundus image classification capable of improving classification accuracy can be provided.
Drawings
The disclosure will now be explained in further detail by way of example only with reference to the accompanying drawings, in which:
Fig. 1 is an application scenario diagram illustrating a reading method of fundus image classification according to an example of the present disclosure.
Fig. 2 is a block diagram showing a reading system for fundus image classification according to an example of the present disclosure.
Fig. 3(a) is a schematic diagram showing a fundus image according to an example of the present disclosure.
Fig. 3(b) is a schematic diagram showing a fundus image according to an example of the present disclosure.
Fig. 4 is a schematic diagram illustrating a convolution kernel employed in a convolutional neural network of the first classification module according to examples of the present disclosure.
Fig. 5 is a block diagram showing a reading system for fundus image classification according to an example of the present disclosure.
Fig. 6 is a flowchart illustrating a reading method of fundus image classification according to an example of the present disclosure.
Detailed Description
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following description, the same components are denoted by the same reference numerals, and redundant description thereof is omitted. The drawings are schematic, and the proportions of the dimensions of the components and the shapes of the components may differ from the actual ones. It is noted that the terms "comprises," "comprising," and "having," and any variations thereof, are used inclusively in this disclosure: for example, a process, method, system, article, or apparatus that comprises or has a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include or have other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus. All methods described in this disclosure can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.
The present disclosure relates to a fundus image classification reading system 200 and a fundus image classification reading method capable of improving classification accuracy. The fundus image classification reading system 200 may be simply referred to as the reading system 200, and the fundus image classification reading method may be simply referred to as the reading method.
Fig. 1 is an application scenario diagram illustrating a reading method of fundus image classification according to an example of the present disclosure.
In some examples, the reading method (described later) may be applied in the application scenario 100 shown in fig. 1. In the application scenario 100, the operator 110 may acquire a fundus image of the human eye 140 by controlling the acquisition device 130 connected to the terminal 120. After the acquisition device 130 completes acquisition of the fundus image, the terminal 120 may submit the fundus image to the server 150 through a computer network. The server 150 may implement the reading method by executing computer program instructions stored on the server 150, receiving the fundus image and generating a result report for it, and may return the generated result report to the terminal 120. In some examples, the terminal 120 may display the result report. In other examples, the result report may be stored as an intermediate result in the memory of the terminal 120 or the server 150. In other examples, the fundus image received by the reading method may be a fundus image stored in the terminal 120 or the server 150.
In some examples, the operator 110 may be a professional, such as an ophthalmologist. In other examples, the operator 110 may be a layperson with image-reading training. Such training may include, but is not limited to, operation of the acquisition device 130 and the operations of the terminal 120 related to the reading method. In some examples, the terminal 120 may include, but is not limited to, a laptop, tablet, desktop, or the like. In some examples, the acquisition device 130 may be a camera, for example a handheld fundus camera or a desktop fundus camera. In some examples, the acquisition device 130 may be connected to the terminal 120 via a serial port. In some examples, the acquisition device 130 may be integrated in the terminal 120.
In some examples, the server 150 may include one or more processors and one or more memories, where a processor may include a central processing unit, a graphics processing unit, and any other electronic component capable of processing data and executing computer program instructions, and the memories may be used to store the computer program instructions. In some examples, the server 150 may implement the reading method by executing the computer program instructions stored in memory. In some examples, the server 150 may also be a cloud server.
The reading system 200 according to the present disclosure is described in detail below with reference to the drawings. The reading system 200 is used for implementing the reading method. Fig. 2 is a block diagram showing the fundus image classification reading system 200 according to an example of the present disclosure.
In some examples, as shown in fig. 2, the reading system 200 can include an acquisition module 210, a preprocessing module 220, a first classification module 230, a grouping module 240, a first quality control module 250, a second classification module 260, and an arbitration module 270. In some examples, the acquisition module 210 may be configured to acquire a fundus image; the preprocessing module 220 may be configured to preprocess the fundus image to obtain a preprocessed fundus image; the first classification module 230 may be configured to classify the preprocessed fundus image and obtain a first classification result and a classification result type; the grouping module 240 may divide the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified; the first quality control module 250 may obtain a final classification result, a first image to be arbitrated, and a second image to be arbitrated based on the negative result image and the positive result image; the second classification module 260 may obtain a final classification result and a third image to be arbitrated based on the image to be reclassified; and the arbitration module 270 may be configured to arbitrate the images to be arbitrated to obtain an arbitration classification result serving as the final classification result. In this case, the fundus image is divided into a negative result image, a positive result image, and an image to be reclassified; lower-risk negative result images and higher-risk positive result images are screened based on the negative prediction rate and the positive prediction rate respectively and subjected to a consistency judgment; the image to be reclassified is further classified using the second classification model; and finally the images to be arbitrated are arbitrated. This can improve the classification accuracy of the reading system 200.
Fig. 3(a) is a schematic diagram showing a fundus image according to an example of the present disclosure. Fig. 3(b) is a schematic diagram showing a fundus image according to an example of the present disclosure.
In some examples, the acquisition module 210 may be used to acquire fundus images. In some examples, the fundus image may be a color fundus image. The colorful fundus images can clearly present rich fundus information such as optic discs, optic cups, yellow spots, blood vessels and the like. In addition, the fundus image may be an image in an RGB mode, a CMYK mode, an Lab mode, a grayscale mode, or the like. In some examples, the fundus image may be acquired by acquisition device 130. In other examples, the fundus image may be a fundus image stored in the terminal 120 or the server 150. As an example of the fundus images, for example, fig. 3(a) and 3(b) are fundus images of different human eyes 140, respectively.
In some examples, the pre-processing module 220 may be used to pre-process the fundus image to obtain a pre-processed fundus image. Specifically, the preprocessing module 220 may acquire the fundus image output by the acquisition module 210, and preprocess the fundus image to obtain a preprocessed fundus image.
In some examples, the pre-processing module 220 may crop the fundus image. In general, since the fundus images acquired by the acquisition module 210 may have a problem of different image formats or sizes, it is necessary to crop the fundus images so that the fundus images are converted into images of a fixed standard form. Fixed standard form may refer to the images being of the same format and consistent size. For example, in some examples, the sizes of the fundus images after being preprocessed may be unified into fundus images of 256 × 256, 374 × 374, 512 × 512, 768 × 768, or 1024 × 1024 pixels.
In some examples, the pre-processing module 220 may perform normalization processing on the fundus image.
In some examples, the normalization process may include operations such as coordinate centering and scale normalization of the fundus image. This can overcome the differences between fundus images and improve the performance of the first classification model. Additionally, in some examples, the preprocessing module 220 may perform noise reduction, graying, and similar processing on the fundus image, which can highlight the features of the fundus image.
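A minimal sketch of such a preprocessing routine is shown below, assuming an OpenCV-based implementation and an illustrative border threshold; neither the library nor the threshold is prescribed by this disclosure:

```python
import cv2
import numpy as np

def preprocess_fundus(image: np.ndarray, size: int = 512) -> np.ndarray:
    """Crop to the circular fundus region, resize to a fixed size, normalize."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    mask = gray > 10                              # drop the dark border (assumed threshold)
    ys, xs = np.where(mask)
    cropped = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    resized = cv2.resize(cropped, (size, size))   # unify size, e.g. 512 x 512 pixels
    return resized.astype(np.float32) / 255.0     # scale intensities to [0, 1]
```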
In some examples, the fundus images may also be classified directly, without preprocessing.
In some examples, the first classification module 230 may be used to classify the preprocessed fundus image and obtain a first classification result and a classification result type. In some examples, the first classification module 230 may also obtain a confidence for the first classification result. In some examples, the first classification module 230 may output the first classification result according to the retinopathy grading used by the British national retinopathy screening program. In some examples, the first classification result may include at least no retinopathy (R0), a background stage (R1), a pre-proliferative stage (R2), and a proliferative stage (R3). In this case, being based on a retinopathy grading that is already in mature use can further improve the classification accuracy of the reading system 200. In some examples, the first classification result may also include no diabetic macular edema (M0) and macular edema (M1).
Examples of the disclosure are not limited thereto, and in other examples, the first classification result may include at least a negative result and a positive result. In some examples, the first classification module 230 may screen out pre-processed fundus images that cannot be classified (e.g., pre-processed fundus images that are of too poor a picture quality to be classified).
In some examples, the classification result type may indicate whether reclassification is required (i.e., reclassification required or reclassification not required). In some examples, the classification result type may be obtained based on the first classification result. In some examples, the classification result type of a preprocessed fundus image whose first classification result is the background stage may be set to reclassification required, and the other preprocessed fundus images may be set to reclassification not required. In some examples, whether reclassification is needed may be determined based on the confidence of the first classification result. For example, a preprocessed fundus image whose first classification result has a confidence lower than a preset confidence (e.g., 40% or 50%) is set to reclassification required, and the other preprocessed fundus images are set to reclassification not required.
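A minimal sketch of how the classification result type might be derived under these rules; the function name and the use of "R1"/None as labels are illustrative assumptions:

```python
def needs_reclassification(label: str, confidence: float,
                           preset_confidence: float = 0.5) -> bool:
    """Derive the classification result type from the first classification result."""
    if label is None:          # unclassifiable image, e.g. picture quality too poor
        return True
    if label == "R1":          # background stage is routed to reclassification
        return True
    return confidence < preset_confidence
```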
In some examples, the first classification module 230 may classify the fundus image using a machine learning algorithm to obtain the first classification result. In some examples, the machine learning algorithm may be a traditional machine learning algorithm, a deep learning algorithm, or both. In this case, an appropriate machine learning algorithm can be selected according to actual needs. In some examples, the first classification model may be established based on a machine learning algorithm.
Fig. 4 is a schematic diagram illustrating a convolution kernel employed in the convolutional neural network of the first classification module 230 according to an example of the present disclosure.
In some examples, the first classification model established based on a deep learning algorithm may be a convolutional neural network (CNN). In some examples, the convolutional neural network may automatically identify features in a fundus image using 3 × 3 convolution kernels (see fig. 4). Examples of the disclosure are not limited thereto; in other examples, the convolution kernels may be 5 × 5, 2 × 2, or 7 × 7 kernels, among others. In this case, the convolutional neural network's efficiency at image feature recognition can effectively improve the performance of the reading system 200.
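For illustration, a toy convolutional classifier built from 3 × 3 kernels might look as follows in PyTorch; the framework and architecture are assumptions, as the disclosure does not prescribe a specific network:

```python
import torch
import torch.nn as nn

class SmallFundusCNN(nn.Module):
    """Toy classifier built from 3 x 3 convolution kernels (cf. fig. 4)."""
    def __init__(self, num_classes: int = 4):     # e.g. R0, R1, R2, R3
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))        # class logits
```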
Examples of the disclosure are not limited thereto, and in other examples, the machine learning algorithm of the first classification module 230 may be a traditional machine learning algorithm. In some examples, traditional machine learning algorithms may include, but are not limited to, linear regression, logistic regression, decision tree, support vector machine, or Bayesian algorithms, among others. In this case, the fundus features in the fundus image may be extracted using an image processing algorithm and then input into a first classification model established based on a traditional machine learning algorithm to classify the fundus image.
In some examples, the first classification model may include a plurality of sub-classification models. The respective sub-classification models may be for each type of diabetic retinopathy. Each sub-classification model may receive the pre-processed fundus image and obtain sub-classification results. In some examples, the first classification module 230 may obtain the first classification result based on a plurality of sub-classification results. Thereby, the first classification result can be obtained based on the plurality of sub-classification models.
Specifically, different sub-classification models may be established for no retinopathy, the background stage, the pre-proliferative stage, no diabetic macular edema, and so on, each trained to output a sub-classification result indicating whether a given type of diabetic retinopathy is present (for example, background stage or not) together with a confidence; the first classification result may then be obtained from the sub-classification results and confidences. For example, the sub-classification result with the highest confidence may be taken as the first classification result.
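One plausible way to combine the sub-classification results, keeping the most confident one as just described; the binary-sigmoid interface of each sub-model is an illustrative assumption:

```python
import torch

def first_classification(image: torch.Tensor, sub_models: dict):
    """Run one binary sub-model per retinopathy type; keep the most confident."""
    best_label, best_conf = None, 0.0
    for label, model in sub_models.items():       # e.g. {"R0": m0, "R1": m1, ...}
        conf = torch.sigmoid(model(image)).item() # probability that this type applies
        if conf > best_conf:
            best_label, best_conf = label, conf
    return best_label, best_conf
```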
In some examples, the grouping module 240 may divide the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified. In some examples, the preprocessed fundus image may be divided into a negative result image, a positive result image, and an image to be reclassified based on the first classification result and the classification result type. In this case, the division facilitates subsequent targeted processing of each image, which can further improve the classification accuracy of the reading system 200.
Specifically, the negative result image may include a preprocessed fundus image whose first classification result is no retinopathy and whose classification result type is that reclassification is not required. The positive result image may include a preprocessed fundus image whose first classification result is the pre-proliferative stage or the proliferative stage and whose classification result type is that reclassification is not required. The image to be reclassified may include a preprocessed fundus image whose classification result type is that reclassification is required. In some examples, the preprocessed fundus images that need to be reclassified may include preprocessed fundus images whose first classification result is the background stage and preprocessed fundus images that cannot be classified.
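A minimal sketch of this routing rule follows; the labels reuse the R0-R3 convention from above, and the function name is hypothetical:

```python
def group(label: str, needs_reclassify: bool) -> str:
    """Route a preprocessed fundus image to one of the three groups."""
    if needs_reclassify:            # background stage or unclassifiable images
        return "to_reclassify"
    if label == "R0":               # no retinopathy
        return "negative"
    if label in {"R2", "R3"}:       # pre-proliferative or proliferative stage
        return "positive"
    return "to_reclassify"          # anything else falls back to reclassification
```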
In some examples, the first quality control module 250 may obtain the final classification result, the first image to be arbitrated, and the second image to be arbitrated based on the negative result image and the positive result image. As shown in fig. 2, in some examples, the first quality control module 250 may include a negative quality control module 251 and a positive quality control module 252.
In some examples, the negative quality control module 251 may receive the negative result image and obtain the negative quality control result using a first quality control model in which a first confidence threshold is configured based on a preset negative prediction rate. Generally speaking, the higher the negative prediction rate, the stricter the first quality control model is about confirming negative results (i.e., no retinopathy), and uncertain negative result images will be classified as positive results (i.e., some diabetic retinopathy exists). Specifically, the preset negative prediction rate may be set according to requirements (e.g., customer requirements or default requirements) before the reading system 200 is formally released to the production environment, and the first confidence threshold is then adjusted and tested against the preset negative prediction rate until a first confidence threshold corresponding to it is obtained. In some examples, the preset negative prediction rate may be 95% to 99%; for example, 95%, 96%, 97%, 98%, or 99%.
In some examples, the first confidence threshold may be configured with gold standard data. Thereby, a first confidence threshold can be determined. In some examples, the first confidence threshold may be back-solved according to a preset negative prediction rate based on the gold standard data. In some examples, in the inverse solution, the confidence thresholds within a preset range (e.g., the preset range may be 90% to 100%) may be traversed by a preset step size and the performance indicators may be solved to obtain the correspondence between the plurality of confidence thresholds and the plurality of sets of performance indicators, and the first confidence threshold may be determined based on the correspondence between the plurality of confidence thresholds and the plurality of sets of performance indicators and the preset negative prediction rate. In this case, the first confidence threshold can be determined conveniently and quickly by using the correspondence between the plurality of confidence thresholds and the plurality of sets of performance indicators. In some examples, the performance indicators may include sensitivity, specificity, positive prediction rate, and negative prediction rate.
Specifically, on the gold standard data, performance indicators such as sensitivity, specificity, positive prediction rate (which may be computed as true positives / (true positives + false positives)), and negative prediction rate (which may be computed as true negatives / (true negatives + false negatives)) are determined for each confidence threshold. In some examples, the confidence thresholds within the preset range may be traversed with a preset step size (for example, 0.01, 0.001, or 0.0001) and the associated performance indicators solved. In this case, a table recording the group of performance indicators corresponding to each confidence threshold may be created from the correspondence between the confidence thresholds and the groups of performance indicators; the confidence threshold whose performance indicators achieve the preset negative prediction rate is the first confidence threshold.
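A sketch of this back-solving procedure on gold standard data follows; the array names, the example preset rate, and the search range are illustrative assumptions:

```python
import numpy as np

def back_solve_threshold(conf: np.ndarray, is_negative: np.ndarray,
                         preset_npv: float = 0.98, step: float = 0.001):
    """Traverse thresholds on gold standard data, compute the negative prediction
    rate at each one, and return the first threshold reaching the preset rate."""
    table = []
    for t in np.arange(0.90, 1.0, step):                # assumed preset range
        released = conf >= t                            # images released as negative
        tn = np.sum(released & is_negative)             # true negatives
        fn = np.sum(released & ~is_negative)            # false negatives
        npv = tn / (tn + fn) if (tn + fn) > 0 else 0.0
        table.append((float(t), npv))
    for t, npv in table:
        if npv >= preset_npv:
            return t, table                             # first confidence threshold
    return None, table                                  # preset rate not reachable
```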
However, examples of the present disclosure are not limited thereto, and in other examples, the first confidence threshold corresponding to the preset negative prediction rate may be obtained by continuously adjusting and testing the first confidence threshold based on the preset negative prediction rate. For example, a first initial confidence threshold may be set based on the gold standard data and a negative prediction rate based on the first initial confidence threshold may be obtained, if an absolute difference between the negative prediction rate and a preset negative prediction rate is greater than a preset value (e.g., 1%, 2%, or 3%), the first initial confidence threshold is adjusted and the negative prediction rate and the preset negative prediction rate are continuously compared, otherwise, the first initial confidence threshold is used as the first confidence threshold.
In some examples, the first quality control model may be the same as the first classification model. In other examples, the first quality control model may be a model retrained on negative result images. In some examples, the negative quality control result may include part of the first classification results; for example, a negative quality control result may include no retinopathy. In some examples, the confidence may be the probability that a negative result image belongs to the negative quality control result. In some examples, the first confidence threshold may include a positive release threshold, a negative release threshold, and a result threshold. In some examples, if the first quality control model includes a plurality of first sub-quality-control models, one for each type of diabetic retinopathy, there may be multiple sets of first confidence thresholds; for example, n first sub-quality-control models require n sets of first confidence thresholds. In some examples, the first quality control model can output the negative quality control result according to the result threshold. In some examples, negative quality control results whose confidence under the first quality control model falls between the negative and positive release thresholds may be arbitrated.
In some examples, if the negative quality control result of the negative result image is consistent with the first classification result, the negative quality control result is used as the final classification result of the negative result image, otherwise, the negative result image is used as the first image to be arbitrated. In this case, the negative result image with a lower risk can be distinguished and compared with the first classification result by setting the preset negative prediction rate.
In some examples, the first quality control module 250 can include a positive quality control module 252. In some examples, the positive quality control module 252 may receive the positive result image and obtain the positive quality control result using a second quality control model in which a second confidence threshold is configured based on a preset positive prediction rate. Generally speaking, the higher the positive prediction rate, the stricter the second quality control model is about confirming positive results (i.e., the presence of some type of diabetic retinopathy), and uncertain positive result images will be classified as negative results (i.e., no retinopathy). Specifically, before the reading system 200 is formally released to the production environment, the preset positive prediction rate may be set according to requirements (e.g., customer requirements or default requirements), and the second confidence threshold is then adjusted and tested against the preset positive prediction rate until a second confidence threshold corresponding to it is obtained. In some examples, the preset positive prediction rate may be 95% to 99%; for example, 95%, 96%, 97%, 98%, or 99%.
In some examples, the second confidence threshold may be configured using gold standard data; thereby, the second confidence threshold can be determined. In some examples, the second confidence threshold may be back-solved from the preset positive prediction rate based on the gold standard data. In some examples, the second confidence threshold may be determined based on the correspondence between the confidence thresholds and the groups of performance indicators and the preset positive prediction rate. In this case, the second confidence threshold can be determined conveniently and quickly using that correspondence. For details, see the related description of back-solving the first confidence threshold.
However, the examples of the present disclosure are not limited thereto, and in other examples, the second confidence threshold corresponding to the preset positive prediction rate may be obtained by continuously adjusting and testing the second confidence threshold based on the preset positive prediction rate.
In some examples, the second quality control model can be the same as the first classification model. In other examples, the second quality control model may be a model retrained on positive result images. In some examples, the positive quality control result may include part of the first classification results; for example, a positive quality control result can include the pre-proliferative stage and the proliferative stage. In some examples, the confidence may be the probability that a positive result image belongs to the positive quality control result. In some examples, the second confidence threshold may include a positive release threshold, a negative release threshold, and a result threshold. For details, refer to the description of the first confidence threshold.
In some examples, if the positive quality control result of the positive result image is consistent with the first classification result, the positive quality control result is used as the final classification result of the positive result image, otherwise, the positive result image is used as the second image to be arbitrated. In this case, the positive result image with higher risk can be distinguished and compared with the first classification result by setting the preset positive prediction rate.
As described above, the reading system 200 may include a second classification module 260 (see fig. 2). In some examples, the second classification module 260 may obtain the final classification result and a third image to be arbitrated based on the image to be reclassified.
In some examples, the second classification module 260 may receive the image to be reclassified and classify it using the second classification model to obtain a second classification result. The second classification model may be based on deep learning and trained on images to be reclassified. In some examples, the second classification result may include part of the first classification results; for example, the second classification result may include no retinopathy, the background stage, the pre-proliferative stage, and the proliferative stage. In some examples, when training on images to be reclassified, features relevant to such images may be extracted and used together with the images to train the second classification model. In some examples, the relevant features may include microaneurysms, hemorrhages, exudates, cotton wool spots, neovascularization, or maculopathy. In some examples, the relevant features may include patient information such as health status, age, and medical history. In some examples, the second classification model may also be trained in conjunction with the color, texture, and shape features of the image to be reclassified.
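One plausible way to combine image features with such patient features when training the second classification model is late fusion, sketched below; this architecture is an assumption for illustration, not one specified by the disclosure:

```python
import torch
import torch.nn as nn

class SecondClassifier(nn.Module):
    """Fuse CNN image features with tabular patient features (age, history, etc.)."""
    def __init__(self, backbone: nn.Module, feat_dim: int, meta_dim: int,
                 num_classes: int = 4):
        super().__init__()
        self.backbone = backbone                   # any CNN emitting feat_dim features
        self.head = nn.Linear(feat_dim + meta_dim, num_classes)

    def forward(self, image: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.backbone(image), meta], dim=1)
        return self.head(fused)                    # logits over the second result
```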
In some examples, if the second classification result of the image to be reclassified is consistent with the first classification result, the second classification result is used as a final classification result of the image to be reclassified, otherwise, the image to be reclassified is used as a third image to be arbitrated.
In some examples, the arbitration module 270 may be configured to arbitrate the image to be arbitrated to obtain an arbitration classification result. In some examples, the arbitration classification result may be used as the final classification result. In some examples, the image to be arbitrated may be the first image to be arbitrated, the second image to be arbitrated, or the third image to be arbitrated. In some examples, the arbitration classification result may turn out to be consistent with the first classification result. In some examples, the image to be arbitrated may be judged by an arbitrator to obtain the arbitration classification result.
Fig. 5 is a block diagram showing a reading system 200 for fundus image classification according to an example of the present disclosure.
In some examples, as shown in fig. 5, the reading system 200 further includes a self-check module 280. In some examples, the self-check module 280 may be configured to spot-check fundus images with negative quality control results to determine whether the first confidence threshold meets the requirement, and to spot-check fundus images with positive quality control results to determine whether the second confidence threshold meets the requirement. In some examples, a sampling method may be used for the spot check; for example, random sampling. In some examples, the sampling for a newly released reading system 200 may be tightened (e.g., the sampling rate increased). In this case, whether the first confidence threshold and the second confidence threshold meet the requirements can be further confirmed, which can improve the classification accuracy of the reading system 200.
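A minimal sketch of such a random spot check; the sampling rate shown is an illustrative assumption:

```python
import random

def spot_check(results: list, sample_rate: float = 0.05, seed: int = 0) -> list:
    """Randomly sample quality control results for manual review; a newly
    released system might tighten this by raising sample_rate."""
    if not results:
        return []
    rng = random.Random(seed)
    k = max(1, int(len(results) * sample_rate))
    return rng.sample(results, k)
```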
In some examples, as shown in fig. 5, the reading system 200 may further include an output module 290. In some examples, the output module 290 may be used to output a result report. In some examples, the output module 290 may output the result report of the fundus image based on at least one of the first classification result, the negative quality control result, the positive quality control result, the second classification result, the arbitration classification result, and the final classification result. In some examples, the result report may include a confidence for each result.
Hereinafter, the reading method of fundus image classification of the present disclosure is described in detail with reference to fig. 6. The reading method of fundus image classification according to the present disclosure may sometimes be simply referred to as the reading method. The reading method is applied to the reading system 200. Fig. 6 is a flowchart illustrating a reading method of fundus image classification according to an example of the present disclosure.
In some examples, as shown in fig. 6, the reading method may include acquiring a fundus image (step S110), preprocessing the fundus image to obtain a preprocessed fundus image (step S120), classifying the preprocessed fundus image and obtaining a first classification result and a classification result type (step S130), dividing the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified (step S140), obtaining a final classification result, a first image to be arbitrated, and a second image to be arbitrated based on the negative result image and the positive result image (step S150), obtaining a final classification result and a third image to be arbitrated based on the image to be reclassified (step S160), and arbitrating the image to be arbitrated to obtain an arbitration classification result serving as the final classification result (step S170). In this case, the fundus image is divided into a negative result image, a positive result image, and an image to be reclassified; lower-risk negative result images and higher-risk positive result images are screened based on the negative prediction rate and the positive prediction rate respectively and subjected to a consistency judgment; the image to be reclassified is further classified using the second classification model; and finally the images to be arbitrated are arbitrated. Therefore, the classification accuracy can be improved.
In some examples, in step S110, a fundus image may be acquired. The fundus image may be a color fundus image, which can clearly present rich fundus information such as the optic disc, optic cup, macula, and blood vessels. For a detailed description, refer to the related description of the acquisition module 210 in the reading system 200.
In some examples, in step S120, the fundus image may be preprocessed to obtain a preprocessed fundus image. In some examples, the fundus image may be cropped, normalized, denoised, grayed, and so on. For a detailed description, refer to the related description of the preprocessing module 220 in the reading system 200.
In some examples, in step S130, the preprocessed fundus image may be classified using a first classification model based on deep learning to obtain a first classification result. In some examples, the classification result type may be obtained based on the first classification result. In some examples, the classification result type indicates whether reclassification is required. In some examples, the first classification result may be output according to the retinopathy grading system used by the British national retinopathy screening program. In some examples, the first classification result may include at least no retinopathy (R0), a background stage (R1), a pre-proliferative stage (R2), and a proliferative stage (R3). In this case, the classification accuracy can be further improved based on a retinopathy grading system that is already in mature use. In some examples, unclassifiable preprocessed fundus images may be identified (e.g., preprocessed fundus images whose picture quality is too poor to be classified). In some examples, the first classification model may include a plurality of sub-classification models, one for each type of diabetic retinopathy, each of which may receive the preprocessed fundus image and obtain a sub-classification result, and the first classification result may be obtained based on the plurality of sub-classification results. For a detailed description, refer to the related description of the first classification module 230 in the reading system 200.
In some examples, in step S140, the preprocessed fundus image may be divided into a negative result image, a positive result image, and an image to be reclassified based on the first classification result and the classification result type. In this case, the division facilitates subsequent targeted processing of each image, which can further improve the classification accuracy. Specifically, the negative result image may include a preprocessed fundus image whose first classification result is no retinopathy and whose classification result type is that reclassification is not required; the positive result image may include a preprocessed fundus image whose first classification result is the pre-proliferative stage or the proliferative stage and whose classification result type is that reclassification is not required; and the image to be reclassified may include a preprocessed fundus image whose classification result type is that reclassification is required. In some examples, the preprocessed fundus images that need to be reclassified may include preprocessed fundus images whose first classification result is the background stage and preprocessed fundus images that cannot be classified. For a detailed description, refer to the related description of the grouping module 240 in the reading system 200.
In some examples, in step S150, a negative quality control result of the negative result image may be obtained using the first quality control model. In some examples, the first quality control model can be configured with a first confidence threshold based on a preset negative prediction rate. In some examples, if the negative quality control result is consistent with the first classification result, the negative quality control result is taken as the final classification result; otherwise, the negative result image is taken as the first image to be arbitrated. In some examples, a positive quality control result of the positive result image can be obtained using the second quality control model, which can be configured with a second confidence threshold based on a preset positive prediction rate. In some examples, if the positive quality control result is consistent with the first classification result, the positive quality control result is taken as the final classification result; otherwise, the positive result image is taken as the second image to be arbitrated. In some examples, the first confidence threshold may be configured using gold standard data (i.e., continually adjusted on gold standard data until finally determined); thereby, the first confidence threshold can be determined. In some examples, the second confidence threshold may be configured in the same way; thereby, the second confidence threshold can be determined. For a detailed description, refer to the description of the first quality control module 250 in the reading system 200.
In some examples, in step S160, the image to be reclassified may be classified using a second classification model based on deep learning to obtain a second classification result. In some examples, the second classification model may be trained specifically on images to be reclassified. If the second classification result is consistent with the first classification result, the second classification result is taken as the final classification result; otherwise, the image to be reclassified is taken as the third image to be arbitrated. For a detailed description, refer to the related description of the second classification module 260 in the film reading system 200.
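Steps S150 and S160 share the same agree-or-arbitrate pattern; a compact Python sketch of that pattern follows (illustrative names only):

```python
from typing import Callable, Optional, Tuple

def agree_or_arbitrate(
    first_result: str,
    reviewer: Callable[[object], str],  # a quality control model or the second classification model
    image: object,
) -> Tuple[Optional[str], Optional[object]]:
    """Return (final classification result, image to be arbitrated).

    Agreement between the reviewer and the first classification result fixes
    the final result; disagreement forwards the image to the arbitration module.
    """
    review_result = reviewer(image)
    if review_result == first_result:
        return review_result, None
    return None, image
```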
In some examples, in step S170, the image to be arbitrated may be arbitrated to obtain an arbitration classification result, which is taken as the final classification result. The image to be arbitrated may be the first image to be arbitrated, the second image to be arbitrated, or the third image to be arbitrated. For a detailed description, refer to the related description of the arbitration module 270 in the film reading system 200.
In some examples, the film reading method further comprises a self-check step (not shown). In the self-check step, fundus images with a negative quality control result may be spot-checked to judge whether the first confidence threshold meets the requirement, and fundus images with a positive quality control result may be spot-checked to judge whether the second confidence threshold meets the requirement. In this case, it can be further confirmed whether the first confidence threshold and the second confidence threshold meet the requirements, which can improve the classification accuracy of the film reading system 200. For a detailed description, refer to the related description of the self-check module 280 in the film reading system 200.
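A minimal sketch of such a spot check in Python (the sample size, sampling scheme, and acceptance criterion are not specified by the disclosure and are assumed here):

```python
import random

def spot_check(records, sample_size=50, target_rate=0.97, seed=0):
    """Spot-check quality control results against gold-standard review.

    `records` is a list of (image_id, quality_control_result, gold_label)
    tuples; the sampled agreement rate is compared with the preset
    prediction rate to judge whether the confidence threshold still meets it.
    """
    rng = random.Random(seed)
    sample = rng.sample(records, min(sample_size, len(records)))
    if not sample:
        raise ValueError("no quality control records to spot-check")
    agreement = sum(1 for _, qc, gold in sample if qc == gold) / len(sample)
    return agreement >= target_rate, agreement
```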
In some examples, the film reading method further comprises an output step for outputting a result report. For a detailed description, refer to the related description of the output module 290 in the film reading system 200.
While the invention has been described in detail in connection with the drawings and the embodiments, it is to be understood that the above description is not intended to limit the invention in any way. Those skilled in the art can make modifications and variations to the present invention as needed without departing from the true spirit and scope of the invention, and such modifications and variations are within the scope of the invention.
Claims (10)
1. A film reading system for fundus image classification, characterized by comprising:
an acquisition module for acquiring a fundus image;
a preprocessing module for preprocessing the fundus image to obtain a preprocessed fundus image;
a first classification module that receives the preprocessed fundus image, classifies it using a first classification model based on deep learning to obtain a first classification result, and obtains, based on the first classification result, a classification result type indicating whether reclassification is required;
a grouping module that divides the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified based on the first classification result and the classification result type;
a first quality control module comprising a negative quality control module and a positive quality control module, wherein the negative quality control module receives the negative result image and obtains a negative quality control result of the negative result image using a first quality control model whose first confidence threshold is configured based on a preset negative prediction rate, taking the negative quality control result as a final classification result if it is consistent with the first classification result and otherwise taking the negative result image as a first image to be arbitrated, and wherein the positive quality control module receives the positive result image and obtains a positive quality control result of the positive result image using a second quality control model whose second confidence threshold is configured based on a preset positive prediction rate, taking the positive quality control result as the final classification result if it is consistent with the first classification result and otherwise taking the positive result image as a second image to be arbitrated;
a second classification module that receives the image to be reclassified and classifies it using a second classification model based on deep learning and trained on images to be reclassified to obtain a second classification result, taking the second classification result as the final classification result if it is consistent with the first classification result and otherwise taking the image to be reclassified as a third image to be arbitrated; and
an arbitration module that receives the first image to be arbitrated, the second image to be arbitrated, or the third image to be arbitrated as an image to be arbitrated, and arbitrates the image to be arbitrated to obtain an arbitration classification result as the final classification result.
2. The film reading system of claim 1, wherein:
the first classification model includes a plurality of sub-classification models, one for each type of diabetic retinopathy, each receiving the preprocessed fundus image and obtaining a sub-classification result, and the first classification module obtains the first classification result based on the plurality of sub-classification results.
3. The film reading system of claim 1, wherein:
the preset negative prediction rate is 95% to 99%, and the preset positive prediction rate is 95% to 99%.
4. The film reading system of claim 1, wherein:
the first classification module outputs the first classification result according to the retinopathy classification system used by the British national retinopathy screening program.
5. The film reading system of claim 4, wherein:
the first classification result includes no retinopathy, a background stage, a pre-proliferative stage, and a proliferative stage;
the negative result image includes the preprocessed fundus image whose first classification result is no retinopathy and whose classification result type does not require reclassification;
the positive result image includes the preprocessed fundus image whose first classification result is the pre-proliferative stage or the proliferative stage and whose classification result type does not require reclassification; and
the image to be reclassified includes the preprocessed fundus image whose classification result type requires reclassification.
6. The film reading system of claim 1, wherein:
the film reading system further comprises a self-check module for spot-checking fundus images with a negative quality control result to judge whether the first confidence threshold meets the requirement, and for spot-checking fundus images with a positive quality control result to judge whether the second confidence threshold meets the requirement.
7. The film reading system of claim 1, wherein:
the first confidence threshold is configured using gold standard data based on the preset negative prediction rate, and the second confidence threshold is configured using gold standard data based on the preset positive prediction rate.
8. The film reading system of any one of claims 1 to 7, wherein:
the film reading system further comprises an output module for outputting a result report.
9. A film reading method for fundus image classification, characterized by comprising:
acquiring a fundus image;
preprocessing the fundus image to obtain a preprocessed fundus image;
classifying the preprocessed fundus image using a first classification model based on deep learning to obtain a first classification result, and obtaining, based on the first classification result, a classification result type indicating whether reclassification is required;
dividing the preprocessed fundus image into a negative result image, a positive result image, and an image to be reclassified based on the first classification result and the classification result type;
obtaining a negative quality control result of the negative result image using a first quality control model whose first confidence threshold is configured based on a preset negative prediction rate, taking the negative quality control result as a final classification result if it is consistent with the first classification result and otherwise taking the negative result image as a first image to be arbitrated, and obtaining a positive quality control result of the positive result image using a second quality control model whose second confidence threshold is configured based on a preset positive prediction rate, taking the positive quality control result as the final classification result if it is consistent with the first classification result and otherwise taking the positive result image as a second image to be arbitrated;
classifying the image to be reclassified using a second classification model based on deep learning and trained on images to be reclassified to obtain a second classification result, taking the second classification result as the final classification result if it is consistent with the first classification result and otherwise taking the image to be reclassified as a third image to be arbitrated; and
taking the first image to be arbitrated, the second image to be arbitrated, or the third image to be arbitrated as an image to be arbitrated, and arbitrating the image to be arbitrated to obtain an arbitration classification result as the final classification result.
10. The film reading method of claim 9, wherein:
fundus images with a negative quality control result are spot-checked to judge whether the first confidence threshold meets the requirement, and fundus images with a positive quality control result are spot-checked to judge whether the second confidence threshold meets the requirement.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110316350.9A CN115131592A (en) | 2021-03-24 | 2021-03-24 | Fundus image classification film reading system and fundus image classification film reading method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115131592A (en) | 2022-09-30
Family
ID=83374027
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110316350.9A (pending) | Fundus image classification film reading system and fundus image classification film reading method | 2021-03-24 | 2021-03-24 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN115131592A (en) |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |