CN112700420A - Eye fundus image complementing and classifying method and system - Google Patents


Info

Publication number
CN112700420A
CN112700420A (application CN202011644263.8A)
Authority
CN
China
Prior art keywords
image
fundus image
model
fundus
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011644263.8A
Other languages
Chinese (zh)
Inventor
王一军
龚梦星
张航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN202011644263.8A
Publication of CN112700420A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fundus image complementing and classifying method and system. The method comprises the following steps: performing noise detection on a fundus image and marking the noise regions; extracting blood-vessel direction features from the fundus image; filling the noise regions using the extracted blood-vessel direction features as constraints; and classifying the completed fundus image with a pre-trained fundus image classification model, outputting the recognition and classification result of the fundus image. By extracting the blood-vessel direction features and completing the noise regions on the fundus image with those features as constraints, a damaged fundus image is well repaired. A fundus image classification model trained with an improved convolutional residual network recognizes and classifies fundus images; the blood-vessel features of a fundus image can be extracted quickly and accurately, which improves both the efficiency and the accuracy of fundus image classification.

Description

Eye fundus image complementing and classifying method and system
Technical Field
The invention relates to the field of image processing, in particular to a fundus image complementing and classifying method and system.
Background
A fundus image contains rich blood-vessel features. However, owing to the external environment during image acquisition or to dirt in the eye, the acquired fundus image contains noise; the noise regions interfere with blood-vessel feature extraction and make fundus image classification inaccurate, so the resulting classification lacks reference value.
In addition, conventional fundus image classification mainly follows two approaches:
First, manual classification by a doctor based on experience. The classification results of this approach are usually fairly accurate, but the process depends heavily on the doctor's personal experience and cannot be widely popularized. The approach is also inefficient: when numerous fundus images must be diagnosed, it imposes a very heavy workload on the doctor.
Second, automatic recognition and classification by machine learning. This approach automates fundus image recognition and classification, but the blood vessels in a fundus image are usually very fine; traditional machine learning methods cannot recognize the blood-vessel features accurately and have difficulty distinguishing the feature differences between different fundus images, so the classification results are of limited reference value.
Disclosure of Invention
The invention provides a fundus image complementing and classifying method and system, aiming to complete fundus images, automatically recognize and classify the completed fundus images, and improve the efficiency and accuracy of fundus image recognition.
To achieve this technical purpose, the technical solution of the invention is as follows:
a fundus image complementing and classifying method comprises the following steps:
step S1, performing noise detection on the fundus image and marking the noise regions;
step S2, extracting blood-vessel direction features representing the course of the blood vessels in the fundus image;
step S3, completing the noise regions using the extracted blood-vessel direction features as constraints;
and step S4, classifying the completed fundus image with a pre-trained fundus image classification model and outputting the recognition and classification result of the fundus image.
In the method, the step S3 includes:
extracting the main direction features of the blood vessel as a reference sequence from the blood-vessel direction feature map within a neighborhood window around the pixel point to be repaired, namely the noise region, and calculating the average gray value of all normal pixel points in the reference sequence as the reference value for noise repair;
calculating the distance weight between all normal pixel points and points to be repaired in the neighborhood window;
and calculating and accumulating the products of the distance weights between all normal pixel points and the point to be repaired in the neighborhood window and the reference value, and taking the accumulated value as the pixel value of the point to be repaired.
In the method, in step S4, the method of training the fundus image classification model in advance includes:
step S41, acquiring a fundus image from the fundus image database as a sample image for model training;
step S42, cutting the sample image into image blocks to increase the number of samples;
step S43, image preprocessing is carried out on the image blocks to enhance the blood vessel characteristics;
step S44, training and forming a fundus image classification initial model by using the enhanced image blocks and the original sample images as training samples through an improved convolution residual error network;
step S45, performing performance evaluation on the fundus image classification initial model, and adjusting model training parameters according to the evaluation result;
and step S46, updating and training the initial fundus image classification model with the adjusted model training parameters and checking the model's prediction success rate; when the success rate reaches a preset value, the fundus image classification model training is finished; otherwise, return to step S41 and repeat.
In the method, in step S43, the image preprocessing of the image block includes performing a morphological top-hat and bottom-hat transformation on the green-channel image of the image block to enhance its image contrast;
the morphological top-hat and bottom-hat transformation of the image block proceeds as follows:
perform a top-hat transformation and a bottom-hat transformation on the image block, and subtract the bottom-hat-transformed image from the top-hat-transformed image to enhance the contrast of the image block.
In the method, in step S43, the image preprocessing of the image block further includes enhancing the image block with a contrast-limited adaptive histogram equalization method.
In the method, in step S44, the convolutional residual network includes five sequentially connected residual structure blocks; each residual structure block includes 3 × 3 convolutional layers and one 1 × 1 convolutional layer, and an image input to the residual structure block is output after feature-convolution extraction by the 3 × 3 convolutional layers and the 1 × 1 convolutional layer in sequence;
the input of the latter convolutional layer in each residual structure block is the sum of the output of the previous convolutional layer and the input of the residual structure block;
the fundus image is input to the first residual structure block of the convolutional residual network; the output of the first residual structure block is down-sampled and connected to the input of the second residual structure block; the output of the second residual structure block is down-sampled and connected to the input of the third residual structure block; the output of the third residual structure block is up-sampled and connected to the input of the fourth residual structure block; the output of the fourth residual structure block is up-sampled and connected to the input of the fifth residual structure block; and the output of the fifth residual structure block passes through a softmax classifier, which outputs the classification prediction result for the input fundus image.
In the method, each residual structure block further comprises a batch normalization layer and a PReLU activation layer connected to it; an image input to the residual structure block is first batch-normalized by the batch normalization layer, then nonlinearly activated by the PReLU function of the activation layer, and finally output after feature-convolution extraction by the 3 × 3 convolutional layers and the 1 × 1 convolutional layer.
In the method, in step S45, the evaluation indices for evaluating the performance of the initial fundus image classification model include at least one of: an accuracy index evaluating the accuracy of model prediction, a sensitivity index evaluating the sensitivity of model prediction, a specificity index evaluating the specificity of the model's fundus image predictions, and an AUC index evaluating the segmentation performance of the model.
In the method, the accuracy index is calculated by the following formula:
A = m / (m + n)
wherein A represents the accuracy index;
m represents the blood-vessel points on the fundus image correctly classified by the initial image classification model;
n represents the background points on the fundus image misclassified by the initial image classification model;
the sensitivity index is calculated by the following formula:
B = m / (m + q)
wherein B represents the sensitivity index;
p represents the background points on the fundus image correctly classified by the initial image classification model;
q represents the blood-vessel points on the fundus image misclassified by the initial image classification model;
the specificity index is calculated by the following formula:
C = p / (p + n)
wherein C represents the specificity index.
A fundus image completion and classification system, used to implement the above method, comprises:
the noise area detection module is used for detecting image noise on the fundus image and marking a noise area;
the blood vessel direction characteristic extraction module is used for extracting blood vessel direction characteristics in the fundus image;
the image completion module is used for completing the noise area by taking the extracted blood vessel direction characteristics as constraints;
and the image classification module, used to classify the completed fundus image with a pre-trained fundus image classification model and output the image classification result.
In the system, the image classification module includes:
the training sample acquisition unit is used for acquiring an eye fundus image from an eye fundus image database as a sample image for model training;
the sample amplification unit is connected with the training sample acquisition unit and is used for cutting the sample image into image blocks so as to amplify the number of samples;
the image preprocessing unit is connected with the sample amplification unit and used for preprocessing the image blocks so as to enhance the blood vessel characteristics on the sample image;
the fundus image classification model training unit is connected with the image preprocessing unit and the training sample acquisition unit and is used for training to form a fundus image classification initial model by taking the enhanced image blocks and the original sample image as training samples through an improved convolution residual error network;
the model performance evaluation unit is connected with the eye fundus image classification model training unit and used for carrying out performance evaluation on the eye fundus image classification initial model to obtain a model performance evaluation result;
the model training parameter adjusting unit is connected with the model performance evaluating unit and used for adjusting model training parameters according to the evaluation result;
the eye fundus image classification model training unit is connected with the model training parameter adjusting unit and is also used for updating and training the eye fundus image classification initial model according to the adjusted model training parameters and finally training to form the eye fundus image classification model.
The technical effects of the invention are as follows:
1. a damaged fundus image is well repaired by extracting the blood-vessel direction features and completing the noise regions on the fundus image with those features as constraints;
2. a fundus image classification model trained with an improved convolutional residual network recognizes and classifies fundus images; the blood-vessel features of a fundus image can be extracted quickly and accurately and compared against the corresponding fundus diseases, which improves the efficiency and accuracy of fundus image classification. The method is particularly suited to extracting fine blood-vessel features effectively and to distinguishing the blood-vessel feature differences between different fundus images.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the invention, and a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a step diagram of a fundus image complementing and classifying method according to an embodiment of the present invention;
FIG. 2 is a diagram of the method steps for training the fundus image classification model according to the present invention;
FIG. 3 is a network structure diagram of a convolutional residual network for training the fundus image classification model;
FIG. 4 is a schematic view of the internal structure of the structural block;
FIG. 5 is a schematic diagram of the internal structure of the optimized structural block;
FIG. 6 is a schematic structural diagram of a fundus image completion and classification system according to an embodiment of the present invention;
fig. 7 is a schematic diagram of the internal structure of the image classification module in the fundus image completion and classification system.
Detailed Description
The technical scheme of the invention is further explained by the specific implementation mode in combination with the attached drawings.
The drawings are for illustration only, are shown schematically rather than in actual form, and are not to be construed as limiting the present patent. To better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged, or reduced, and do not represent the size of an actual product; those skilled in the art will understand that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments correspond to the same or similar components. In the description of the present invention, terms such as "upper", "lower", "left", "right", "inner", and "outer" indicating orientation or positional relationships are based on the orientations shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation; such terms are illustrative only, are not to be construed as limiting the present patent, and their specific meanings can be understood by those skilled in the art according to the specific situation.
In the description of the present invention, unless otherwise explicitly specified or limited, the term "connected" and the like, where it indicates a connection between components, is to be understood broadly: for example, fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium, or through one or more other components, or an interactive relationship between two components. The specific meanings of these terms in the present invention can be understood by those skilled in the art according to the specific case.
An embodiment of the present invention provides a method for complementing and classifying fundus images, as shown in fig. 1, including the steps of:
in step S1, noise detection is performed on the fundus image, and a noise region is marked. The step can be realized by common image noise detection methods, such as an average filter, wavelet denoising and the like, and can be selected according to actual conditions.
In step S2, blood-vessel direction features representing the direction of the blood vessels are extracted from the fundus image. Many existing methods can extract such features; for example, the NMRT (Neighborhood Matching Radon Transform) algorithm can be used, chosen according to specific needs.
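The patent names NMRT as one option but does not detail it. As a stand-in, the sketch below estimates local vessel orientation with a structure tensor, a common alternative for extracting direction features; treating it as a proxy for the NMRT feature map is an assumption of this sketch.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def vessel_orientation(gray, sigma=2.0):
    # Local orientation (in radians) of the dominant image structure,
    # used here as a stand-in for the blood-vessel direction feature map.
    g = gray.astype(np.float32)
    gx, gy = sobel(g, axis=1), sobel(g, axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)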
In step S3, the noise regions are completed using the extracted blood-vessel direction features as constraints. Briefly, the completion proceeds as follows:
First, the main direction features of the blood vessel (features representing the vessel's course) are extracted as a reference sequence from the blood-vessel direction feature map within a neighborhood window around the points (pixel points) to be repaired, and the average gray value of all normal pixel points in the reference sequence is calculated as the reference value for noise repair.
Then, the distance weights between all normal pixel points in the neighborhood window and the point to be repaired are calculated. In this embodiment, the distance weights of all normal pixel points in the neighborhood window sum to 1. In general, the closer two pixels are, the closer their color information, such as contrast, brightness, and intensity. Because the main-direction feature of a blood vessel has a certain length and the distances from the broken location to the pixels of the neighborhood window differ, each pixel in the window is given a weight corresponding to its distance from the pixel to be repaired. In this embodiment the weights sum to 1; depending on the actual situation, they could also sum to 100 or 1000.
Finally, the products of the distance weights of all normal pixel points in the neighborhood window and the reference value are computed and accumulated, and the accumulated value is taken as the pixel value of the point to be repaired.
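A minimal sketch of this repair rule follows. Note that, read literally, weighting a single reference value by weights that sum to 1 just returns the reference value, so the sketch weights each normal pixel's gray value instead, which is the usual reading; this interpretation, and the omission of the main-direction restriction on the window, are assumptions of the sketch.

import numpy as np

def repair_pixel(gray, mask, y, x, half=3):
    # Fill one marked noise pixel from the normal pixels in its
    # neighborhood window; distance weights are normalized to sum to 1.
    y0, y1 = max(0, y - half), min(gray.shape[0], y + half + 1)
    x0, x1 = max(0, x - half), min(gray.shape[1], x + half + 1)
    ys, xs = np.mgrid[y0:y1, x0:x1]
    normal = ~mask[ys, xs]
    if not normal.any():
        return float(gray[y, x])       # no normal neighbors to draw from
    d = np.hypot(ys - y, xs - x)[normal]
    w = 1.0 / (d + 1.0)                # closer pixels get larger weights
    w /= w.sum()                       # distance weights sum to 1
    vals = gray[ys, xs][normal].astype(np.float32)
    return float((w * vals).sum())     # accumulated weighted value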
In step S4, the completed fundus image is classified by the pre-trained fundus image classification model, and the resulting image classification serves as a basis for a doctor's diagnosis of fundus disease.
The invention adopts an improved convolutional residual network to train the fundus image classification model. The network structure of the convolutional residual network is shown in Fig. 3 and includes five sequentially connected residual structure blocks. As shown in Fig. 4, each residual structure block includes 3 × 3 convolutional layers and one 1 × 1 convolutional layer; an input image passes through feature-convolution extraction by the 3 × 3 convolutional layers and the 1 × 1 convolutional layer in sequence and is then output.
The fundus image is input to the first residual structure block of the convolutional residual network; the output of the first residual structure block is down-sampled (a max-pooling operation) and connected to the input of the second residual structure block; the output of the second residual structure block is down-sampled and connected to the input of the third residual structure block; the output of the third residual structure block is up-sampled and connected to the input of the fourth residual structure block; the output of the fourth residual structure block is up-sampled and connected to the input of the fifth residual structure block; and the output of the fifth residual structure block passes through a softmax classifier, which outputs the classification prediction result for the input fundus image.
To improve the performance of the image classification model, as shown in Fig. 3, the input of the latter convolutional layer in each residual structure block is preferably the sum of the output of the previous convolutional layer and the input of the residual structure block itself.
To speed up model training and improve the accuracy of fundus image classification, as shown in Fig. 5, each residual structure block preferably further includes a batch normalization layer BN, which batch-normalizes the feature map input to the block to increase the classification speed of the model, and a PReLU activation layer connected to the batch normalization layer. The PReLU activation layer improves the model's ability to map the input image to a nonlinear feature map and thus the model's prediction performance. An image input to the residual structure block is first batch-normalized by the batch normalization layer, then nonlinearly activated by the PReLU activation layer, and finally output after feature-convolution extraction by the 3 × 3 convolutional layers and the 1 × 1 convolutional layer.
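A PyTorch sketch of one such residual structure block and of the five-block arrangement is given below. The channel width, the number of 3 × 3 layers, the pooling and upsampling operators, and the number of output classes are assumptions of this sketch rather than values fixed by the patent.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # BN -> PReLU -> 3x3 conv, then a 1x1 conv whose input is the sum of
    # the previous conv's output and the block's own input (Fig. 5).
    def __init__(self, channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.PReLU()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv1 = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        out = self.conv3(self.act(self.bn(x)))
        return self.conv1(out + x)

class FundusClassifier(nn.Module):
    # Five blocks: two down-sampling (max pooling) stages and two
    # up-sampling stages, then a classifier head. The head returns
    # logits; the softmax classification is applied at prediction time
    # (or implicitly by a cross-entropy loss during training).
    def __init__(self, in_ch=3, num_classes=5, width=32):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, width, 3, padding=1)
        self.blocks = nn.ModuleList(ResidualBlock(width) for _ in range(5))
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(width, num_classes))

    def forward(self, x):
        x = self.blocks[0](self.stem(x))
        x = self.blocks[1](self.pool(x))
        x = self.blocks[2](self.pool(x))
        x = self.blocks[3](self.up(x))
        x = self.blocks[4](self.up(x))
        return self.head(x)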
The specific method and process for training the fundus image classification model of the invention are explained as follows:
as shown in fig. 2, the method for training a fundus image classification model according to the present invention specifically includes:
in step S41, a fundus image is acquired from the fundus image database as a sample image for model training.
In step S42, the sample image is cropped into image blocks to expand the number of samples. This step considers that the number of incomplete fundus image samples may be limited, and a model trained on insufficient samples is often not accurate. To address this, the sample image is cropped into image blocks. By analogy, a palm image can be cropped into 5 finger images: the overall image features of the palm can identify a specific person, and the finger-vein features of the 5 fingers can also identify that person. One palm image is thus expanded into 5 images, and the trained model achieves higher recognition performance.
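The cropping itself can be as simple as the following sketch; the patch size and stride are illustrative assumptions.

def crop_patches(image, patch=64, stride=64):
    # Cut a sample image (a NumPy array) into patches to expand the
    # training set; each patch becomes an additional training sample.
    h, w = image.shape[:2]
    return [image[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, stride)
            for x in range(0, w - patch + 1, stride)]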
In step S43, image preprocessing is performed on the image blocks to enhance the contrast of the sample image and thereby highlight the blood-vessel features.
In step S44, an initial fundus image classification model is trained with the improved convolutional residual network, using the enhanced image blocks together with the original sample images as training samples.
In step S45, the performance of the initial fundus image classification model is evaluated, and the model training parameters are adjusted according to the evaluation result. The model's predictions are compared with the true results to estimate the loss, and the model parameters are adjusted according to that loss; the specific adjustment can be made according to the actual situation, and the adjusted parameters bring the predictions closer to the true results.
In step S46, the initial fundus image classification model is updated and retrained with the adjusted model training parameters, finally yielding the fundus image classification model. In this embodiment, training is considered complete when the model's output reaches a preset success rate of 90%; in actual operation this can be adjusted to whatever success rate is required.
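The following sketch shows one way steps S44 to S46 can fit together: train, measure the success rate on held-out data, adjust a training parameter, and stop once the preset value is reached. The optimizer, the learning-rate halving rule, and the round limit are assumptions of the sketch; only the 90% stopping criterion comes from this embodiment.

import torch

def train_until_target(model, train_loader, val_loader,
                       target=0.90, lr=1e-3, max_rounds=10):
    # The model is assumed to return logits, as in the sketch above.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(max_rounds):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in val_loader:
                correct += (model(x).argmax(1) == y).sum().item()
                total += y.numel()
        if correct / total >= target:   # preset success rate reached
            break
        for g in opt.param_groups:      # adjust a training parameter
            g["lr"] *= 0.5
    return model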
In step S43, the image preprocessing of the image block includes performing a morphological top-hat and bottom-hat transformation on the green-channel image of the image block to enhance the image contrast and highlight the blood-vessel features. The transformation proceeds as follows:
perform a top-hat transformation and a bottom-hat transformation on the image block, and subtract the bottom-hat-transformed image from the top-hat-transformed image to enhance the contrast of the image block.
To further highlight the blood-vessel features, the image preprocessing of the image block in step S43 preferably also includes enhancing the image block with a contrast-limited adaptive histogram equalization (CLAHE) method.
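An OpenCV sketch of this preprocessing chain is given below. The structuring-element size and the CLAHE parameters are assumptions; the patent itself specifies only the green channel, the top-hat/bottom-hat difference, and contrast-limited adaptive histogram equalization. (A common variant adds the original image back before subtracting; the sketch follows the patent's wording.)

import cv2

def preprocess_block(bgr, kernel_size=15, clip=2.0, tiles=8):
    # Green channel -> (top-hat minus bottom-hat) -> CLAHE.
    green = bgr[:, :, 1]
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                  (kernel_size, kernel_size))
    tophat = cv2.morphologyEx(green, cv2.MORPH_TOPHAT, k)
    bothat = cv2.morphologyEx(green, cv2.MORPH_BLACKHAT, k)
    enhanced = cv2.subtract(tophat, bothat)   # saturating uint8 subtraction
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles))
    return clahe.apply(enhanced)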
In step S45, the evaluation indices for evaluating the performance of the initial fundus image classification model include an accuracy index evaluating the accuracy of model prediction, a sensitivity index evaluating the sensitivity of model prediction, a specificity index evaluating the specificity of the model's fundus image predictions, and an AUC index (the area under the ROC curve) evaluating the segmentation performance of the model.
Wherein the accuracy index is calculated by the following formula (1):
A = m / (m + n)    (1)
In formula (1), A represents the accuracy index;
m represents the blood-vessel points on the fundus image correctly classified by the initial image classification model;
n represents the background points on the fundus image misclassified by the initial image classification model.
The sensitivity index is calculated by the following formula (2):
B = m / (m + q)    (2)
In formula (2), B represents the sensitivity index;
p represents the background points on the fundus image correctly classified by the initial image classification model;
q represents the blood-vessel points on the fundus image misclassified by the initial image classification model.
The specificity index is calculated by the following formula (3):
C = p / (p + n)    (3)
In formula (3), C represents the specificity index.
The AUC index is a common index for model evaluation, so the calculation of the AUC and the plotting of the ROC curve are not described here.
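Because the source renders formulas (1) to (3) only as image placeholders, the expressions in the sketch below are a reconstruction from the variable definitions and the standard forms of these metrics, and should be read as an assumption. The AUC can be obtained from any standard library, for example scikit-learn's roc_auc_score.

def evaluate_indices(m, n, p, q):
    # m: vessel points classified correctly, n: background points
    # misclassified, p: background points classified correctly,
    # q: vessel points misclassified.
    accuracy = m / (m + n)        # formula (1), as reconstructed
    sensitivity = m / (m + q)     # formula (2), as reconstructed
    specificity = p / (p + n)     # formula (3), as reconstructed
    return accuracy, sensitivity, specificity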
The present invention also provides a fundus image completion and classification system that can implement the above fundus image complementing and classifying method. As shown in Fig. 6, the system includes:
and the noise area detection module is used for detecting image noise on the fundus image and marking a noise area.
And the blood vessel direction characteristic extraction module is used for extracting the blood vessel direction characteristics in the fundus image.
And the image completion module is used for completing the noise area by taking the extracted blood vessel direction characteristics as constraints.
And the image classification module, used to classify the completed fundus image with the pre-trained fundus image classification model and output the image classification result.
As shown in fig. 7, the image classification module specifically includes:
and the training sample acquisition unit is used for acquiring the fundus image from the fundus image database as a sample image for model training.
And the sample amplification unit is connected with the training sample acquisition unit and is used for cutting the sample image into image blocks so as to amplify the number of the samples.
And the image preprocessing unit is connected with the sample amplification unit and is used for preprocessing the image of the image block so as to enhance the blood vessel characteristics on the sample image.
And the fundus image classification model training unit is connected with the image preprocessing unit and the training sample acquisition unit and is used for training to form a fundus image classification initial model by taking the enhanced image blocks and the original sample image as training samples through an improved convolution residual error network.
And the model performance evaluation unit is connected with the eye fundus image classification model training unit and is used for carrying out performance evaluation on the eye fundus image classification initial model to obtain a model performance evaluation result.
And the model training parameter adjusting unit is connected with the model performance evaluating unit and is used for adjusting the model training parameters according to the evaluation result.
The fundus image classification model training unit is also connected to the model training parameter adjusting unit and is further used to update and retrain the initial fundus image classification model with the adjusted model training parameters, finally obtaining the fundus image classification model.
The invention achieves image completion of damaged fundus images, and the fundus image classification model obtained by training the improved convolutional residual network offers higher classification speed and accuracy, and is better suited to identifying, extracting, and classifying fine blood-vessel features.
It should be understood that the above are merely preferred embodiments of the invention together with the technical principles applied. Those skilled in the art may make various modifications, equivalents, and changes to the invention; such variations fall within the scope of the invention as long as they do not depart from its spirit. In addition, certain terms used in the specification and claims of the present application are not limiting and are used merely for convenience of description.

Claims (11)

1. A fundus image complementing and classifying method, characterized by comprising the following steps:
step S1, performing noise detection on the fundus image and marking the noise regions;
step S2, extracting blood-vessel direction features representing the course of the blood vessels in the fundus image;
step S3, completing the noise regions using the extracted blood-vessel direction features as constraints;
and step S4, classifying the completed fundus image with a pre-trained fundus image classification model and outputting the recognition and classification result of the fundus image.
2. The method according to claim 1, wherein the step S3 comprises:
extracting the main direction features of the blood vessel as a reference sequence from the blood-vessel direction feature map within a neighborhood window around the pixel point to be repaired, namely the noise region, and calculating the average gray value of all normal pixel points in the reference sequence as the reference value for noise repair;
calculating the distance weight between all normal pixel points and points to be repaired in the neighborhood window;
and calculating and accumulating the products of the distance weights between all normal pixel points and the point to be repaired in the neighborhood window and the reference value, and taking the accumulated value as the pixel value of the point to be repaired.
3. The method according to claim 1, wherein in step S4, the method of pre-training the fundus image classification model includes:
step S41, acquiring a fundus image from the fundus image database as a sample image for model training;
step S42, cutting the sample image into image blocks to increase the number of samples;
step S43, image preprocessing is carried out on the image blocks to enhance the blood vessel characteristics;
step S44, training and forming a fundus image classification initial model by using the enhanced image blocks and the original sample images as training samples through an improved convolution residual error network;
step S45, performing performance evaluation on the fundus image classification initial model, and adjusting model training parameters according to the evaluation result;
and step S46, updating and training the initial fundus image classification model with the adjusted model training parameters and checking the model's prediction success rate; when the success rate reaches a preset value, the fundus image classification model training is finished; otherwise, return to step S41 and repeat.
4. The method according to claim 3, wherein in step S43, the image preprocessing of the image block comprises performing a morphological top-hat and bottom-hat transformation on the green-channel image of the image block to enhance its image contrast;
the morphological top-hat and bottom-hat transformation of the image block proceeds as follows:
perform a top-hat transformation and a bottom-hat transformation on the image block, and subtract the bottom-hat-transformed image from the top-hat-transformed image to enhance the contrast of the image block.
5. The method according to claim 3 or 4, wherein in step S43, the image preprocessing of the image block comprises image enhancement of the image block by using a contrast-limited adaptive histogram equalization method.
6. The method according to claim 3, wherein in step S44, the convolutional residual network comprises five sequentially connected residual structure blocks; each residual structure block comprises 3 × 3 convolutional layers and one 1 × 1 convolutional layer, and an image input to the residual structure block is output after feature-convolution extraction by the 3 × 3 convolutional layers and the 1 × 1 convolutional layer in sequence;
the input of the latter convolutional layer in each residual structure block is the sum of the output of the previous convolutional layer and the input of the residual structure block;
the fundus image is input to the first residual structure block of the convolutional residual network; the output of the first residual structure block is down-sampled and connected to the input of the second residual structure block; the output of the second residual structure block is down-sampled and connected to the input of the third residual structure block; the output of the third residual structure block is up-sampled and connected to the input of the fourth residual structure block; the output of the fourth residual structure block is up-sampled and connected to the input of the fifth residual structure block; and the output of the fifth residual structure block passes through a softmax classifier, which outputs the classification prediction result for the input fundus image.
7. The method as claimed in claim 6, wherein each residual structure block further includes a batch normalization layer and a PReLU activation layer connected to the batch normalization layer; an image input to the residual structure block is first batch-normalized by the batch normalization layer, then nonlinearly activated by the PReLU function of the activation layer, and finally output after feature-convolution extraction by the 3 × 3 convolutional layers and the 1 × 1 convolutional layer.
8. The method according to claim 3, wherein in step S45, the evaluation indices for evaluating the performance of the initial fundus image classification model include at least one of: an accuracy index evaluating the accuracy of model prediction, a sensitivity index evaluating the sensitivity of model prediction, a specificity index evaluating the specificity of the model's fundus image predictions, and an AUC index evaluating the segmentation performance of the model.
9. The method of claim 8, wherein the accuracy index is calculated by the following formula:
A = m / (m + n)
wherein A represents the accuracy index;
m represents the blood-vessel points on the fundus image correctly classified by the initial image classification model;
n represents the background points on the fundus image misclassified by the initial image classification model;
the sensitivity index is calculated by the following formula:
B = m / (m + q)
wherein B represents the sensitivity index;
p represents the background points on the fundus image correctly classified by the initial image classification model;
q represents the blood-vessel points on the fundus image misclassified by the initial image classification model;
the specificity index is calculated by the following formula:
C = p / (p + n)
wherein C represents the specificity index.
10. A fundus image completion and classification system for implementing the method according to any one of claims 1 to 9, comprising:
the noise area detection module is used for detecting image noise on the fundus image and marking a noise area;
the blood vessel direction characteristic extraction module is used for extracting blood vessel direction characteristics in the fundus image;
the image completion module is used for completing the noise area by taking the extracted blood vessel direction characteristics as constraints;
and the image classification module, used to classify the completed fundus image with a pre-trained fundus image classification model and output the image classification result.
11. The system of claim 10, wherein the image classification module comprises:
the training sample acquisition unit is used for acquiring an eye fundus image from an eye fundus image database as a sample image for model training;
the sample amplification unit is connected with the training sample acquisition unit and is used for cutting the sample image into image blocks so as to amplify the number of samples;
the image preprocessing unit is connected with the sample amplification unit and used for preprocessing the image blocks so as to enhance the blood vessel characteristics on the sample image;
the fundus image classification model training unit is connected with the image preprocessing unit and the training sample acquisition unit and is used for training to form a fundus image classification initial model by taking the enhanced image blocks and the original sample image as training samples through an improved convolution residual error network;
the model performance evaluation unit is connected with the eye fundus image classification model training unit and used for carrying out performance evaluation on the eye fundus image classification initial model to obtain a model performance evaluation result;
the model training parameter adjusting unit is connected with the model performance evaluating unit and used for adjusting model training parameters according to the evaluation result;
the eye fundus image classification model training unit is connected with the model training parameter adjusting unit and is also used for updating and training the eye fundus image classification initial model according to the adjusted model training parameters and finally training to form the eye fundus image classification model.
CN202011644263.8A 2020-12-31 2020-12-31 Eye fundus image complementing and classifying method and system Pending CN112700420A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011644263.8A CN112700420A (en) 2020-12-31 2020-12-31 Eye fundus image complementing and classifying method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011644263.8A CN112700420A (en) 2020-12-31 2020-12-31 Eye fundus image complementing and classifying method and system

Publications (1)

Publication Number Publication Date
CN112700420A true CN112700420A (en) 2021-04-23

Family

ID=75514302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011644263.8A Pending CN112700420A (en) 2020-12-31 2020-12-31 Eye fundus image complementing and classifying method and system

Country Status (1)

Country Link
CN (1) CN112700420A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318565A (en) * 2014-10-24 2015-01-28 中南大学 Interactive method for retinal vessel segmentation based on bidirectional region growing of constant-gradient distance
CN104835157A (en) * 2015-05-04 2015-08-12 北京工业大学 Eye fundus image optical cup automatic segmentation method based on improved PDE image repairing
CN105761258A (en) * 2016-02-06 2016-07-13 上海市第人民医院 Retinal fundus image bleeding detection method
CN108665474A (en) * 2017-03-31 2018-10-16 中南大学 A kind of eye fundus image Segmentation Method of Retinal Blood Vessels based on B-COSFIRE
RU2683758C1 (en) * 2018-06-27 2019-04-01 Федеральное государственное учреждение "Федеральный исследовательский центр "Информатика и управление" Российской академии наук" (ФИЦ ИУ РАН) Automated analysis system for angiographic images of human eyeground
CN111754481A (en) * 2020-06-23 2020-10-09 北京百度网讯科技有限公司 Fundus image recognition method, device, equipment and storage medium
CN111862056A (en) * 2020-07-23 2020-10-30 东莞理工学院 Retinal vessel image segmentation method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MENGXING GONG;YIJUN WANG: ""An Feature Image Generation Based on Adversarial Generation Network"", 《INTERNATIONAL CONFERENCE ON MEASURING TECHNOLOGY AND MECHATRONICS AUTOMATION (ICMTMA)》, 30 March 2020 (2020-03-30) *
李媛媛; 蔡轶珩; 高旭蓉: "Retinal vessel segmentation algorithm based on fused phase features" (基于融合相位特征的视网膜血管分割算法), Journal of Computer Applications (计算机应用), no. 07, 20 March 2018 (2018-03-20) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274148A (en) * 2022-12-05 2023-12-22 魅杰光电科技(上海)有限公司 Unsupervised wafer defect detection method based on deep learning
CN116934755A (en) * 2023-09-18 2023-10-24 中国人民解放军总医院第八医学中心 Pulmonary tuberculosis CT image enhancement system based on histogram equalization
CN116934755B (en) * 2023-09-18 2023-12-01 中国人民解放军总医院第八医学中心 Pulmonary tuberculosis CT image enhancement system based on histogram equalization


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination