CN114663421B - Retina image analysis system and method based on information migration and ordered classification - Google Patents


Info

Publication number
CN114663421B
CN114663421B (application CN202210367584.0A)
Authority
CN
China
Prior art keywords
image
network
bionic
blood vessel
segmentation
Prior art date
Legal status
Active
Application number
CN202210367584.0A
Other languages
Chinese (zh)
Other versions
CN114663421A (en)
Inventor
姚新明
陈洁
张靖
赵腾飞
何俊俊
Current Assignee
First Affiliated Hospital of Wannan Medical College
Original Assignee
First Affiliated Hospital of Wannan Medical College
Priority date
Filing date
Publication date
Application filed by First Affiliated Hospital of Wannan Medical College filed Critical First Affiliated Hospital of Wannan Medical College
Priority to CN202210367584.0A
Publication of CN114663421A
Application granted
Publication of CN114663421B
Legal status: Active
Anticipated expiration


Classifications

    • G06T7/0012: Biomedical image inspection (under G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06N3/08: Learning methods (under G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks)
    • G06T7/10: Segmentation; edge detection
    • G06T2207/20048: Transform domain processing
    • G06T2207/20081: Training; learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/30041: Eye; retina; ophthalmic (under G06T2207/30004 Biomedical image processing)
    • G06T2207/30101: Blood vessel; artery; vein; vascular (under G06T2207/30004 Biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention provides an intelligent retinal image analysis system based on information migration and ordered classification, comprising a fundus image preprocessing module, an information-migration-based fundus image blood vessel segmentation network module, and an ordered-classification-based intelligent analysis and prediction module. The preprocessing module builds on a visual image enhancement algorithm in which several cooperating algorithms effectively enhance image quality; the information-migration-based segmentation network effectively realizes retinal blood vessel segmentation; and the intelligent analysis and prediction module combined with ordered classification accurately predicts the retinal state-change level. Together these modules realize effective and fast intelligent analysis of retinal images.

Description

Retina image analysis system and method based on information migration and ordered classification
Technical Field
The invention belongs to the field of artificial-intelligence image processing, and particularly relates to an intelligent retinal image analysis system based on information migration and ordered classification.
Background
A fundus image is a photograph of the fundus structures taken by a fundus camera and mainly shows the retina, choroid, macula, optic nerve and so on. Retinal image acquisition refers to the process of capturing retinal image data with acquisition equipment and involves issues of imaging devices, imaging systems and the like. Acquired retinal fundus images generally fall into two types according to the imaging principle: fluorescent fundus images and conventional light-source fundus images.
During retinal image acquisition, the captured images are affected by light intensity, distance, acquisition angle, the inherent aberration of the human eye and other factors. Acquired retinal images therefore usually contain a large amount of noise, and some also contain lesions; pixel contrast is low, local illumination is uneven, blood vessels are distributed in a complex, dense pattern, various texture backgrounds exist, and image quality is low.
In prior-art image processing, images are generally processed with neural networks. Because of the quality of fundus image acquisition and imaging and the limited generality of conventional algorithms, current retinal blood vessel segmentation of fundus images is inaccurate, which constrains the practical application of intelligent fundus image analysis.
Disclosure of Invention
To address the problems in the prior art, the invention provides an intelligent retinal image analysis system based on information migration and ordered classification: image preprocessing and enhancement are performed first, retinal vessel segmentation is then carried out by an information-migration-based fundus image segmentation network, and finally a neural network combined with ordered classification performs intelligent analysis and prediction, achieving intelligent retinal image analysis.
The technical scheme of the application is as follows:
an intelligent retinal image analysis system based on information migration and ordered classification comprises a fundus image preprocessing module, a fundus image blood vessel segmentation network module and a fundus image intelligent analysis prediction module;
the fundus image preprocessing module performs image preprocessing on the acquired fundus image, and aims to improve image quality and highlight useful characteristics;
the fundus image blood vessel segmentation network module performs blood vessel segmentation on the preprocessed fundus image based on information migration to obtain a blood vessel image, wherein a guiding network and a bionic network are defined in the fundus image blood vessel segmentation network module;
the fundus image intelligent analysis and prediction module predicts state change of the blood vessel image based on ordered classification.
The fundus image preprocessing module comprises an image enhancement submodule, a mask generation submodule and a multi-scale linear filter submodule;
the image enhancement submodule applies the dual-tree complex wavelet transform and an improved top-hat transform to the acquired original fundus image to obtain the image-enhanced fundus image; it effectively improves retinal image quality, increases blood vessel contrast, and highlights characteristic elements (blood vessel structure, optic disc and macula);
The dual-tree complex wavelet transform (DT-CWT) overcomes the defects of the conventional discrete wavelet transform: when the corresponding wavelet bases approximately satisfy the Hilbert transform relationship, it reduces the shift sensitivity of the conventional real wavelet transform and improves direction selectivity, retaining detail information while effectively improving image quality. After the dual-tree complex wavelet transform is applied to the original fundus image, the improved top-hat transform is applied to the complex-wavelet-transformed fundus image;
The improved top-hat transform is as follows: an opening operation is applied to the fundus image after the dual-tree complex wavelet transform to obtain an opened fundus image; when the opened fundus image is subtracted from the transformed image, pixels whose gray value was changed by the opening keep their original value, while unchanged pixels take the plain subtraction result. The improved top-hat transform markedly increases the gray-value differences of the image and effectively protects edges with small amplitude variation, as in the sketch below;
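A minimal Python sketch of this rule as we read it; OpenCV is used for the morphology, and the structuring-element size is an assumption since the patent does not specify one:

```python
import cv2
import numpy as np

def improved_top_hat(img: np.ndarray, ksize: int = 15) -> np.ndarray:
    """Improved top-hat transform (one reading of the description): where the
    opening changed a pixel's gray value, keep the original value instead of
    the classical difference, so small-amplitude edges are preserved."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
    diff = cv2.subtract(img, opened)            # classical top-hat result
    # keep the original gray value wherever the opening altered the pixel
    return np.where(diff > 0, img, diff).astype(img.dtype)
```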
The mask generation submodule performs field-of-view extraction based on spatial luminance information: it converts the image-enhanced fundus image (a color image whose local details have been enhanced and whose quality has been improved by the complex wavelet and top-hat transforms) from RGB to YIQ format, sets a segmentation threshold, extracts the surrounding black field of view, obtains the useful-information area through an erosion operation, separates the fundus area from the background area to obtain a mask image, and multiplies the image-enhanced fundus image by the mask image to obtain the fundus area image, in the following steps:
The first step: convert the image-enhanced fundus image from RGB format to YIQ format:

    Y = 0.299·R + 0.587·G + 0.114·B
    I = 0.596·R - 0.274·G - 0.322·B        (1)
    Q = 0.211·R - 0.523·G + 0.312·B

Formula (1) yields the 3 components of the fundus image in YIQ format;
The second step: set a segmentation threshold, extract the surrounding black field of view, and obtain the region of interest through an erosion operation;
The mask image is obtained as follows:

    M(x', y) = 1, if Y(x', y) > threshold
    M(x', y) = 0, otherwise                (2)

where "1" denotes the background frame and "0" the eyeball blood-vessel region; Y is the luminance information of the image, equal to the gray value of the luminance component (the Y component of the fundus image in YIQ format); M(x', y) is the extracted background frame; and x', y are the pixel coordinates;
The multi-scale linear filter submodule is a Hessian-matrix-based multi-scale linear filter: parameters are set according to the distinct gray values and eigenvalues of the blood vessels in the fundus area image, noise is eliminated by the filtering, blood vessel features are further highlighted, and the filtered blood vessel feature image is obtained; a sketch follows.
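The patent names a Hessian-based multi-scale line filter without giving its formula; the sketch below uses the well-known Frangi-style vesselness measure as a stand-in, so the exact measure, scales and constants are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness(img: np.ndarray, sigmas=(1, 2, 4), beta=0.5, c=15.0) -> np.ndarray:
    """Multi-scale Hessian line filter (Frangi-style stand-in). Tubular,
    anisotropic structures such as vessels respond strongly; isotropic
    point structures (noise) are suppressed."""
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    for s in sigmas:
        # scale-normalized second-order Gaussian derivatives = Hessian entries
        hxx = gaussian_filter(img, s, order=(0, 2)) * s ** 2
        hyy = gaussian_filter(img, s, order=(2, 0)) * s ** 2
        hxy = gaussian_filter(img, s, order=(1, 1)) * s ** 2
        tmp = np.sqrt((hxx - hyy) ** 2 + 4.0 * hxy ** 2)
        l1, l2 = (hxx + hyy + tmp) / 2.0, (hxx + hyy - tmp) / 2.0
        swap = np.abs(l1) > np.abs(l2)       # ensure |l1| <= |l2|
        l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
        rb2 = (l1 / (l2 + 1e-10)) ** 2       # anisotropy (blobness) ratio
        s2 = l1 ** 2 + l2 ** 2               # second-order structureness
        v = np.exp(-rb2 / (2 * beta ** 2)) * (1.0 - np.exp(-s2 / (2 * c ** 2)))
        v[l2 < 0] = 0.0  # dark vessels on a bright fundus: positive curvature
        out = np.maximum(out, v)             # strongest response over scales
    return out
```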
The training process of the fundus image blood vessel segmentation network module comprises the following steps:
Step 201, take the filtered blood vessel feature image and the segmentation labels as the common input of the guiding network and the bionic network, and train the guiding network on the blood vessel segmentation task; during the iterative training of the guiding network, run a segmentation accuracy test on the verification set every A iterations and save every weight whose segmentation accuracy exceeds the set threshold; after the iterative training is complete, select the weight with the highest segmentation accuracy as the optimal weight of the guiding network;
Step 202, during the iterative training of the bionic network, the guiding network loads the optimal guiding-network weight saved in step 201 and generates a guiding codec matrix and a guiding residual similarity for the filtered blood vessel feature image; the bionic network generates the corresponding bionic codec matrix and bionic similarity matrix for the same image and fits the segmentation labels, the guiding codec matrix and the guiding residual similarity, with a loss function used as the constraint during fitting;
Step 203, update the bionic codec matrix, the bionic residual similarity parameters and the segmentation network parameters of the bionic network in the direction that decreases the loss value by back-propagation, and jump to step 202 for further iterative training; run a segmentation accuracy test on the verification set every A iterations, save every weight whose accuracy exceeds the set threshold, and after the iterative training is complete select the weight with the highest segmentation accuracy as the final optimal weight of the bionic network.
In step 202, the guiding codec matrix and the bionic codec matrix G ∈ R^(m×n') are generated from the feature maps of the guiding/bionic encoder and decoder. The output feature map of the encoder layer is F1 ∈ R^(h×w×m), where h, w and m are the height, width and channel number of F1; the output feature map of the decoder layer is F2 ∈ R^(h×w×n'), where h, w and n' are the height, width and channel number of F2. The codec matrix G ∈ R^(m×n') is computed as:

    G_(a,b)(x; W) = (1/(h·w)) · Σ_(s=1..h) Σ_(t=1..w) F1_(s,t,a)(x; W) · F2_(s,t,b)(x; W)        (3)

where x and W denote the input image and the weights of the guiding/bionic network, respectively, and G_(a,b)(x; W) is the element in row a, column b of the guiding or bionic codec matrix.
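A minimal PyTorch sketch of equation (3); the function name and the (batch, channel, height, width) layout are our assumptions:

```python
import torch

def codec_matrix(f_enc: torch.Tensor, f_dec: torch.Tensor) -> torch.Tensor:
    """Codec matrix G of equation (3) from an encoder feature map f_enc of
    shape (B, m, h, w) and a decoder feature map f_dec of shape (B, n', h, w):
    channel-wise inner products averaged over the h*w spatial positions."""
    b, m, h, w = f_enc.shape
    _, n, h2, w2 = f_dec.shape
    assert (h, w) == (h2, w2), "encoder/decoder maps must share spatial size"
    e = f_enc.reshape(b, m, h * w)
    d = f_dec.reshape(b, n, h * w)
    return torch.bmm(e, d.transpose(1, 2)) / (h * w)   # (B, m, n')
```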
In step 202, the residual similarity is extracted by a multi-scale residual similarity collecting module, which collects context information with a similarity volume, specifically as follows:
for the ith feature vector Y^(i), a similarity value P'_j is computed by element-wise multiplication between each central pixel P_center and the pixels P_j in its d×d neighborhood:

    P'_j = P_j × P_center        (4)

where j indexes the coordinates of the d×d region; a local representation is obtained for each pixel of the filtered blood vessel feature image, and the local representations are concatenated along the channel dimension to obtain the residual similarity Y_d^(i) ∈ R^(H×W'×d²) of the ith feature vector, where d is the size of the custom region and H and W' are the height and width of the feature vector; the residual similarities Y_d^(i) obtained for the different values of d are then added together, giving the final residual similarity of the ith feature vector R^(i) = Σ_d Y_d^(i).
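A PyTorch sketch of the similarity collection of equation (4); since the patent does not say how similarity volumes of different sizes d are combined into one sum, this sketch aggregates each d×d volume over its neighborhood axis first, which is an assumption:

```python
import torch
import torch.nn.functional as F

def residual_similarity(feat: torch.Tensor, scales=(3, 5, 7)) -> torch.Tensor:
    """Multi-scale residual similarity: for each scale d, multiply every
    central pixel with each neighbour in its d x d window (equation (4),
    P'_j = P_j * P_center), aggregate the d*d similarity channels, then
    sum over scales.  feat: (B, C, H, W) -> (B, C, H, W)."""
    out = torch.zeros_like(feat)
    b, c, h, w = feat.shape
    for d in scales:
        pad = d // 2
        nb = F.unfold(feat, kernel_size=d, padding=pad)   # (B, C*d*d, H*W)
        nb = nb.view(b, c, d * d, h, w)                   # neighbours per pixel
        sim = nb * feat.unsqueeze(2)                      # P_j * P_center
        out = out + sim.sum(dim=2)                        # aggregate d*d channels
    return out
```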
The guiding codec matrix generated by the guiding network is G_i^T and the guiding residual similarity is R_i^T; the bionic codec matrix generated by the bionic network is G_i^S and the bionic residual similarity is R_i^S, for i = 1, ..., n. The loss function of the information migration task is:

    L(W_t, W_s) = (1/N) · Σ_x Σ_(i=1..n) [ λ_i · ||G_i^T(x; W_t) - G_i^S(x; W_s)||² + β_i · ||R_i^T(x; W_t) - R_i^S(x; W_s)||² ]        (5)

where W_t is the guiding network weight, W_s the bionic network weight, G_i^T the codec matrix of the ith feature vector of the guiding network, G_i^S the codec matrix of the ith feature vector of the bionic network, and n the number of feature vectors; λ_i and β_i are the weight factors of the corresponding loss terms, and N is the number of data points.
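A minimal PyTorch sketch of the migration loss (5); the names are ours, and detaching the guiding-network targets reflects that only the bionic network is updated in step 203:

```python
import torch
import torch.nn.functional as F

def migration_loss(g_teacher, g_student, r_teacher, r_student, lambdas, betas):
    """Information-migration loss of equation (5): an L2 penalty pulling each
    bionic codec matrix and residual similarity toward the corresponding
    guiding-network targets. Arguments are lists of tensors, one entry per
    feature vector; lambdas/betas are the per-term weight factors."""
    loss = 0.0
    for gt, gs, lam in zip(g_teacher, g_student, lambdas):
        loss = loss + lam * F.mse_loss(gs, gt.detach())   # codec-matrix term
    for rt, rs, beta in zip(r_teacher, r_student, betas):
        loss = loss + beta * F.mse_loss(rs, rt.detach())  # residual-similarity term
    return loss
```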
The fundus image intelligent analysis and prediction module based on ordered classification predicts the state change of the segmented blood vessel image, splitting the ordered multi-class problem in sequence into several binary classification problems;
the loss function of the ordered classification is:

    L = -(1/N) · Σ_(c=1..N) Σ_(t=1..T) γ_t · μ_c^t · [ y_c^t · log o_c^t + (1 - y_c^t) · log(1 - o_c^t) ]        (7)

where N is the number of data points, T the number of binary classification tasks and γ_t the weight of the t-th binary classification task; o_c^t is the output of the c-th sample for the t-th binary classification task, y_c^t the true label of the t-th binary classification task for the c-th sample, and μ_c^t the weight of the c-th sample in the t-th binary classification task; W_t denotes the parameters of the classifier of the t-th binary classification task, x_c the c-th input vector, and o_c^t = P(y_c^t = 1 | x_c; W_t) the probability model.
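A PyTorch sketch of the ordered-classification loss (7) and of the integration rule described later in the embodiment (summing the binary decisions); the helper names are assumptions:

```python
import torch
import torch.nn.functional as F

def ordinal_loss(logits, stage, task_weights=None, sample_weights=None):
    """Ordered-classification loss, equation (7): a stage label in {0..T} is
    recast as T binary tasks ("is the stage greater than t?") and a weighted
    binary cross-entropy is summed over tasks.
    logits: (N, T) raw classifier outputs; stage: (N,) integer labels;
    task_weights: (T,) gamma_t; sample_weights: (N, T) mu_c^t."""
    n, t = logits.shape
    thresholds = torch.arange(t, device=stage.device).unsqueeze(0)  # (1, T)
    targets = (stage.unsqueeze(1) > thresholds).float()             # y_c^t
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    if sample_weights is not None:
        loss = loss * sample_weights
    if task_weights is not None:
        loss = loss * task_weights.unsqueeze(0)
    return loss.mean()

def predict_stage(logits):
    """Final stage = sum of positive binary decisions across the T tasks."""
    return (torch.sigmoid(logits) > 0.5).sum(dim=1)
```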
An intelligent retina image analysis method based on information migration and ordered classification comprises the following steps:
step 1, performing image preprocessing on an acquired fundus image, wherein the purpose is to improve the image quality and highlight useful characteristics;
step 2, performing fundus image blood vessel segmentation based on information migration; performing blood vessel segmentation on the preprocessed fundus image to obtain a blood vessel image;
step 3, predicting the state change of the blood vessel image based on ordered classification, wherein this embodiment accurately realizes staged prediction of fundus image lesions.
The step 1 specifically comprises the following steps:
S101, perform the dual-tree complex wavelet transform and the improved top-hat transform on the acquired original fundus image to obtain the image-enhanced fundus image; step S101 effectively improves retinal image quality, increases blood vessel contrast, and highlights characteristic elements (blood vessel structure, optic disc and macula);
the improved top-hat transform is as follows: an opening operation is applied to the fundus image after the dual-tree complex wavelet transform to obtain an opened fundus image; when the opened fundus image is subtracted from the transformed image, pixels whose gray value was changed by the opening keep their original value, while unchanged pixels take the subtraction result;
S102, perform field-of-view extraction based on spatial luminance information: convert the image-enhanced fundus image from RGB to YIQ format, set a segmentation threshold, extract the surrounding black field of view, obtain the useful-information area through an erosion operation, separate the fundus area from the background area to obtain a mask image, and multiply the mask image by the image-enhanced fundus image to obtain the fundus area image;
S103, with the Hessian-matrix-based multi-scale linear filter, set parameters according to the distinct gray values and eigenvalues of the blood vessels in the fundus area image, eliminate noise by filtering, and further highlight the blood vessel features to obtain the filtered blood vessel feature image.
The step 2 specifically comprises the following steps:
Step 201, take the filtered blood vessel feature image and the segmentation labels as the common input of the guiding network and the bionic network, and train the guiding network on the blood vessel segmentation task; during the iterative training of the guiding network, run a segmentation accuracy test on the verification set every A iterations and save every weight whose segmentation accuracy exceeds the set threshold; after the iterative training is complete, select the weight with the highest segmentation accuracy as the optimal weight of the guiding network;
Step 202, during the iterative training of the bionic network, the guiding network loads the optimal guiding-network weight saved in step 201 and generates a guiding codec matrix and a guiding residual similarity for the filtered blood vessel feature image; the bionic network generates the corresponding bionic codec matrix and bionic similarity matrix for the same image and fits the segmentation labels, the guiding codec matrix and the guiding residual similarity, with a loss function used as the constraint during fitting;
Step 203, update the bionic codec matrix, the bionic residual similarity parameters and the segmentation network parameters of the bionic network in the direction that decreases the loss value by back-propagation, and jump to step 202 for further iterative training; run a segmentation accuracy test on the verification set every A iterations, save every weight whose accuracy exceeds the set threshold, and after the iterative training is complete select the weight with the highest segmentation accuracy as the final optimal weight of the bionic network;
In step 202, the guiding codec matrix and the bionic codec matrix G ∈ R^(m×n') are generated from the feature maps of the guiding/bionic encoder and decoder. The output feature map of the encoder layer is F1 ∈ R^(h×w×m), where h, w and m are the height, width and channel number of F1; the output feature map of the decoder layer is F2 ∈ R^(h×w×n'), where h, w and n' are the height, width and channel number of F2. The codec matrix G ∈ R^(m×n') is computed as:

    G_(a,b)(x; W) = (1/(h·w)) · Σ_(s=1..h) Σ_(t=1..w) F1_(s,t,a)(x; W) · F2_(s,t,b)(x; W)        (3)

where x and W denote the input image and the weights of the guiding/bionic network, respectively, and G_(a,b)(x; W) is the element in row a, column b of the guiding or bionic codec matrix;
in step 202, the residual similarity is extracted by a multi-scale residual similarity (MRS) collecting module, which collects context information with a similarity volume, specifically as follows:
for the ith feature vector Y^(i), a similarity value P'_j is computed by element-wise multiplication between each central pixel P_center and the pixels P_j in its d×d neighborhood:

    P'_j = P_j × P_center        (4)

where j indexes the coordinates of the d×d region; a local representation is obtained for each pixel of the filtered blood vessel feature image, and the local representations are concatenated along the channel dimension to obtain the residual similarity Y_d^(i) ∈ R^(H×W'×d²) of the ith feature vector, where d is the size of the custom region and H and W' are the height and width of the feature vector; the residual similarities Y_d^(i) obtained for the different values of d are then added together, giving the final residual similarity of the ith feature vector R^(i) = Σ_d Y_d^(i);
The guiding codec matrix generated by the guiding network is G_i^T and the guiding residual similarity is R_i^T; the bionic codec matrix generated by the bionic network is G_i^S and the bionic residual similarity is R_i^S; the loss function of the information migration task is:

    L(W_t, W_s) = (1/N) · Σ_x Σ_(i=1..n) [ λ_i · ||G_i^T(x; W_t) - G_i^S(x; W_s)||² + β_i · ||R_i^T(x; W_t) - R_i^S(x; W_s)||² ]        (5)

where W_t is the guiding network weight, W_s the bionic network weight, G_i^T the codec matrix of the ith feature vector of the guiding network, G_i^S the codec matrix of the ith feature vector of the bionic network, and n the number of feature vectors (the number of codec matrices); λ_i and β_i are the weight factors of the corresponding loss terms of the ith feature vector, and N is the number of data points;
step 3, predict the state change of the segmented blood vessel image, splitting the ordered multi-class problem in sequence into several binary classification problems;
the loss function of the ordered classification is:

    L = -(1/N) · Σ_(c=1..N) Σ_(t=1..T) γ_t · μ_c^t · [ y_c^t · log o_c^t + (1 - y_c^t) · log(1 - o_c^t) ]        (7)

where N is the number of data points, T the number of binary classification tasks and γ_t the weight of the t-th binary classification task; o_c^t is the output of the c-th sample for the t-th binary classification task, y_c^t the true label of the t-th binary classification task for the c-th sample, and μ_c^t the weight of the c-th sample in the t-th binary classification task; W_t denotes the parameters of the t-th binary classification task's classifier, x_c the c-th input vector, and o_c^t = P(y_c^t = 1 | x_c; W_t) the probability model; the final classification result is obtained by integrating and judging the classification results of the individual binary classification tasks.
Compared with the prior art, the invention has the following beneficial effects:
the invention discloses an intelligent retinal image analysis system based on information migration and ordered classification, which is used for carrying out image preprocessing enhancement, then carrying out retinal blood vessel segmentation based on an information migration fundus image segmentation network, and finally carrying out intelligent analysis and prediction by using a neural network combined with ordered classification so as to realize intelligent retinal image analysis.
The visual image enhancement algorithm of the image preprocessing module effectively enhances the image quality by the cooperative work of multiple algorithms; fundus image blood vessel segmentation network based on information migration effectively realizes retinal blood vessel segmentation; the intelligent analysis prediction module combined with ordered classification accurately predicts, and intelligent analysis of retina images is effectively and rapidly realized.
The image fundus image preprocessing module comprises an image enhancer sub-module, a mask generation sub-module and a multi-scale linear filter sub-module, so that the contrast of blood vessels is improved, characteristic elements are highlighted, the defect of common discrete wavelet transformation is overcome, and detail information is reserved while the image quality is effectively improved; the difference of the gray level of the image is obviously increased, and some edges with small amplitude variation are effectively protected.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
FIG. 1 is a schematic diagram of the overall system architecture of the intelligent analysis system for retinal images based on information migration and ordered classification of the present invention;
fig. 2 is a schematic diagram of the overall structure of a fundus image vessel segmentation network module based on information migration in the present invention;
fig. 3 is a mask image of the present embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by one of ordinary skill in the art based on the embodiments of the present invention without inventive effort fall within the scope of the present invention.
The core idea of the invention is to design suitable algorithms for image preprocessing and enhancement according to the characteristics of fundus images, then perform retinal blood vessel segmentation with an information-migration-based fundus image segmentation network, and finally perform intelligent analysis and prediction with a neural network combined with ordered classification, achieving intelligent retinal image analysis.
The overall flow of the intelligent retinal image analysis system based on information migration and ordered classification is shown in fig. 1. The system comprises a fundus image preprocessing module, a fundus image blood vessel segmentation network module, a fundus image intelligent analysis and prediction module and a foreground display module;
the fundus image preprocessing module performs image preprocessing on the acquired fundus image, and aims to improve image quality and highlight useful characteristics;
the fundus image blood vessel segmentation network module performs blood vessel segmentation on the preprocessed fundus image based on information migration to obtain a blood vessel image; two networks are defined in the module, a guiding network and a bionic network, and the module defines the important information of the guiding network and migrates the knowledge information extracted by the guiding network to the bionic network;
the fundus image intelligent analysis and prediction module predicts the state change of the blood vessel image based on ordered classification; in this embodiment, diabetic retinopathy worsens as an ordered process, and staged prediction of the fundus image state change, from mild through moderate to severe lesions, is accurately realized;
And the foreground display module reads corresponding data from the database, displays the processing results of the algorithm submodules to the user and interacts with the user.
In the embodiment, data collection is performed first: 1000 type 2 diabetes patients with a definite outpatient or inpatient diagnosis between January 2020 and December 2022 are selected; after exclusion and routine examination, uniformly and specially trained diabetes-department nurses photograph the fundus of each subject with a non-mydriatic fundus camera to obtain a unique fundus image;
the fundus image preprocessing module comprises an image enhancement submodule, a mask generation submodule and a multi-scale linear filter submodule;
the image enhancement submodule applies the dual-tree complex wavelet transform and the improved top-hat transform to the acquired original fundus image to obtain the image-enhanced fundus image; it effectively improves retinal image quality, increases blood vessel contrast, and highlights characteristic elements (blood vessel structure, optic disc and macula);
the dual-tree complex wavelet transform (DT-CWT) overcomes the defects of the conventional discrete wavelet transform: when the corresponding wavelet bases approximately satisfy the Hilbert transform relationship, it reduces the shift sensitivity of the conventional real wavelet transform and improves direction selectivity, retaining detail information while effectively improving image quality; after the dual-tree complex wavelet transform is applied to the original fundus image, the improved top-hat transform is applied to the complex-wavelet-transformed fundus image;
The improved top-hat transform: an opening operation is applied to the fundus image after the dual-tree complex wavelet transform to obtain an opened fundus image; when the opened fundus image is subtracted from the transformed image, pixels whose gray value was changed by the opening keep their original value, while unchanged pixels take the subtraction result;
The morphological top-hat transform is realized by combinations of opening and closing operations, which are in turn derived from dilation and erosion. The traditional top-hat transform takes the original image minus its opening as the final result, but that result is dark overall and some darker edges cannot be displayed. In the improved top-hat transform, when the opened fundus image is subtracted from the complex-wavelet-transformed image, pixels whose gray value changed keep their original value and the unchanged pixels take the subtraction result; this markedly increases the gray-value differences of the image and effectively protects edges with small amplitude variation;
After the dual-tree complex wavelet transform and the improved top-hat transform, a fundus image with enhanced local detail and improved image quality is obtained;
The mask generation submodule performs field-of-view extraction based on spatial luminance information: it converts the image-enhanced fundus image (a color image whose local details have been enhanced and whose quality has been improved by the complex wavelet and top-hat transforms) from RGB to YIQ format, sets a segmentation threshold, extracts the surrounding black field of view, obtains the useful-information area through an erosion operation, separates the fundus area from the background area to obtain a mask image (shown in fig. 3), and multiplies the image-enhanced fundus image by the mask image to obtain the fundus area image.
Fundus image analysis after image enhancement requires acquiring an ROI (region of interest), i.e. the fundus area image, so that the influence of pixels outside the ROI can be effectively avoided in subsequent processing and the computational complexity is reduced; the ROI and the useless area can be delimited by a mask image;
The mask image is commonly called the field of view; a mask image of the same size is generated from the original fundus image to separate the fundus area from the background area. To extract the mask image (the fundus image field of view) accurately, the invention provides a field-of-view extraction method based on spatial luminance information: luminance and chrominance information are separated on the basis of the YIQ-format image, and the surrounding black area is then extracted by choosing a segmentation threshold. Unlike the RGB three-color channels, in the YIQ format Y carries the luminance information of the image, I the color change from orange to cyan, and Q the color change from purple to yellow-green. Since the format carries both luminance and color information, the luminance component of the image is separated and extracted, in the following steps:
The first step: convert the image-enhanced fundus image from RGB format to YIQ format:

    Y = 0.299·R + 0.587·G + 0.114·B
    I = 0.596·R - 0.274·G - 0.322·B        (1)
    Q = 0.211·R - 0.523·G + 0.312·B

The above formula (1) yields the 3 components of the YIQ-format image;
The second step: set a segmentation threshold, extract the surrounding black field of view, and obtain the region of interest through an erosion operation;
The choice of the segmentation threshold is the basis of segmentation extraction: a single threshold divides the pixel gray values into two classes, those above the threshold and those below it. The Y component carries the luminance information of the image; the Y-component histogram of a YIQ-format fundus image is bimodal, containing two high-pixel parts with pixel values of 0 at both ends, and separation is performed according to these histogram characteristics;
Through a series of experiments, the segmentation threshold in this embodiment is set to 50 for extracting and separating the black area, and the mask image is obtained:

    M(x', y) = 1, if Y(x', y) > 50
    M(x', y) = 0, otherwise                (2)

where "1" denotes the background frame and "0" the eyeball blood-vessel region; Y is the luminance information of the image, equal to the gray value of the luminance component (the Y component of the YIQ-format fundus image); M(x', y) is the extracted background frame; x', y are the pixel coordinates; if the Y component of the pixel in row x' and column y is greater than 50, the point is regarded as belonging to the background area;
The multi-scale linear filter submodule is a Hessian-matrix-based multi-scale linear filter: parameters are set according to the distinct gray values and eigenvalues of the blood vessels in the fundus area image, noise is eliminated by the filtering, blood vessel features are further highlighted, and the filtered blood vessel feature image is obtained;
The Hessian matrix enhances specific structures (point structures/line structures) in the image, thereby extracting the target features and removing useless noise information;
In a two-dimensional image the Hessian matrix is a 2×2 symmetric matrix with two eigenvalues and corresponding eigenvectors. The two eigenvalues describe the anisotropy of the image variation along the directions of the two eigenvectors; if an ellipse is constructed from the eigenvectors and eigenvalues, the ellipse characterizes this anisotropy of the image variation. Point structures in an image are isotropic, while linear structures are anisotropic. A filter constructed from the Hessian characteristics therefore enhances the linear structures in the fundus area image, i.e. the vascular structures, while filtering out the punctiform structures, i.e. noise points.
As shown in fig. 2, the fundus image blood vessel segmentation network module performs blood vessel segmentation on the filtered blood vessel feature image of the preprocessed fundus image based on information migration. The module contains two segmentation networks: a guiding network and a bionic network. Information migration refers to migrating the knowledge information extracted by the pre-trained guiding network model to the bionic network. The training process of the module comprises the following steps:
Step 201, take the filtered blood vessel feature image and the segmentation labels as the common input of the guiding network and the bionic network, and train the guiding network on the blood vessel segmentation task; during the iterative training of the guiding network, run a segmentation accuracy test on the verification set every 100 iterations and save every weight whose segmentation accuracy exceeds the set threshold; after the iterative training is complete, select the weight with the highest segmentation accuracy as the optimal weight of the guiding network (the optimal weight is saved during iterative training: the network is tested on the verification set every 100 iterations, and the weight at the highest segmentation accuracy is called the optimal weight);
Step 202, during the iterative training of the bionic network, the guiding network loads the optimal guiding-network weight saved in step 201 and generates a guiding codec matrix and a guiding residual similarity for the filtered blood vessel feature image; the bionic network generates the corresponding bionic codec matrix and bionic similarity matrix for the same image and fits the segmentation labels, the guiding codec matrix and the guiding residual similarity, with an L2 loss function used as the constraint during fitting;
Step 203, update the bionic codec matrix, the bionic residual similarity parameters and the segmentation network parameters of the bionic network in the direction that decreases the loss value by back-propagation, and jump to step 202 for further iterative training; run a segmentation accuracy test on the verification set every 100 iterations, save every weight whose accuracy exceeds the set threshold, and after the iterative training is complete select the weight with the highest segmentation accuracy as the final optimal weight of the bionic network. A training-loop sketch follows.
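A schematic PyTorch training loop for either network; the optimizer, iteration budget and accuracy threshold are assumptions, while the validate-every-100-iterations and keep-best-weight logic follows the steps above:

```python
import copy
import torch

def train_with_checkpointing(net, loader, val_loader, loss_fn, accuracy_fn,
                             iters=10000, eval_every=100, acc_threshold=0.9):
    """Run a validation-set accuracy test every `eval_every` iterations, keep
    weights whose accuracy exceeds the set threshold, and return the weights
    with the highest accuracy as the optimal weight."""
    opt = torch.optim.Adam(net.parameters())
    best_acc, best_state = 0.0, None
    data = iter(loader)
    for it in range(1, iters + 1):
        try:
            x, y = next(data)
        except StopIteration:
            data = iter(loader)
            x, y = next(data)
        opt.zero_grad()
        loss = loss_fn(net(x), y)
        loss.backward()                  # back-propagation step of step 203
        opt.step()
        if it % eval_every == 0:
            acc = accuracy_fn(net, val_loader)
            if acc > acc_threshold and acc > best_acc:
                best_acc = acc
                best_state = copy.deepcopy(net.state_dict())  # optimal weight
    return best_state, best_acc
```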
During the encoding and decoding of the neural network (the whole segmentation network's way of solving the problem is an encode-then-decode process), the information flow contains the method of solving the problem; this problem-solving method of the guiding network is migrated to the bionic network as knowledge information, so that the bionic network generalizes better.
In step 202, the guiding codec matrix and the bionic codec matrix G ∈ R^(m×n') (an m×n' matrix; the G matrix is generated from the two feature maps) are generated from the feature maps of the guiding/bionic encoder and decoder. The output feature map of the encoder layer is F1 ∈ R^(h×w×m), where h, w and m are the height, width and channel number of F1 (the feature-map image of that layer of the network); the output feature map of the decoder layer is F2 ∈ R^(h×w×n'), where h, w and n' are the height, width and channel number of F2. The codec matrix G ∈ R^(m×n') is computed as:

    G_(a,b)(x; W) = (1/(h·w)) · Σ_(s=1..h) Σ_(t=1..w) F1_(s,t,a)(x; W) · F2_(s,t,b)(x; W)        (3)

where x and W denote the input image and the weights of the guiding/bionic network (the guiding network or the bionic network), respectively; G_(a,b)(x; W) is the element in row a, column b of the guiding or bionic codec matrix (the codec matrix is an m×n' matrix).
In step 202, the residual similarity is extracted by a multi-scale residual similarity (MRS) collecting module, which captures the feature information of local areas; the MRS module collects context information with a similarity volume, specifically as follows:
for the ith feature vector Y^(i), a similarity value P'_j is computed by element-wise multiplication between each central pixel P_center and the pixels P_j in its d×d neighborhood:

    P'_j = P_j × P_center        (4)

where j indexes the coordinates of the d×d region; a local representation is obtained for each pixel of the filtered blood vessel feature image, and the local representations are concatenated along the channel dimension to obtain the residual similarity Y_d^(i) ∈ R^(H×W'×d²), where d is the size of the custom region and H and W' are the height and width of the feature vector. Because the importance of the pixels around the central pixel decays with distance, the MRS module sets several different values of d and obtains the corresponding residual similarity Y_d^(i) for each; the different d values (d = 3, 5, 7) represent the multiple scales. The residual similarities for d = 3, 5, 7 are then added together, giving the final residual similarity of the ith feature vector R^(i) = Σ_(d∈{3,5,7}) Y_d^(i) (Y_d^(i) is the residual similarity of an input feature vector, R^(i) the residual similarity of the output).
The guiding codec matrix generated by the guiding network is G_i^T and the guiding residual similarity is R_i^T; the bionic codec matrix generated by the bionic network is G_i^S and the bionic residual similarity is R_i^S. Information migration makes the bionic codec matrix and the bionic residual similarity approach the guiding codec matrix and the guiding residual similarity, respectively. The loss function of the information migration task is:

    L(W_t, W_s) = (1/N) · Σ_x Σ_(i=1..n) [ λ_i · ||G_i^T(x; W_t) - G_i^S(x; W_s)||² + β_i · ||R_i^T(x; W_t) - R_i^S(x; W_s)||² ]        (5)

where W_t is the guiding network weight, W_s the bionic network weight, G_i^T the codec matrix of the ith feature vector of the guiding network, G_i^S the codec matrix of the ith feature vector of the bionic network, and n the number of feature vectors; λ_i and β_i are the weight factors of the corresponding loss terms, and N is the number of data points.
The fundus image intelligent analysis and prediction module based on ordered classification predicts the state change of the segmented blood vessel image. It is worth noting that this embodiment is an experiment with fundus images of diabetic patients as samples; the method is applicable to the image processing of all fundus images;
The basic principle of ordered multi-classification is that, because an order relation exists among the dependent variables, the ordered multi-class problem is split in sequence into several simple binary classification problems. In the diabetic retinopathy staging problem studied in this embodiment, five stages are distinguished according to the severity of the state change: normal (stage 0), mild NPDR (stage 1), moderate NPDR (stage 2), severe NPDR (stage 3) and PDR (stage 4). Sorting with ordered classification splits this problem into 4 binary classification problems, namely (0 vs 1+2+3+4), (0+1 vs 2+3+4), (0+1+2 vs 3+4) and (0+1+2+3 vs 4), as illustrated in the sketch below.
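A tiny Python illustration of this split; the function name is ours:

```python
def binary_targets(stage, num_tasks=4):
    """Recast a 5-level DR stage (0=normal .. 4=PDR) as the four binary tasks
    (0 vs 1+2+3+4), (0+1 vs 2+3+4), (0+1+2 vs 3+4), (0+1+2+3 vs 4)."""
    return [1 if stage > t else 0 for t in range(num_tasks)]

# e.g. moderate NPDR (stage 2) -> [1, 1, 0, 0]; summing the list recovers 2.
assert binary_targets(2) == [1, 1, 0, 0]
```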
This embodiment addresses the diabetic retinopathy staging problem. Solved with a traditional convolutional neural network it would be a plain multi-class problem: multiple convolution and pooling layers and fully connected layers followed by a final softmax regression. That, however, ignores the characteristics of the lesion stages themselves. The network structure here mainly follows GoogLeNet, with the last layer set to ordered classification; the loss function is as follows:
    L = -(1/N) · Σ_(c=1..N) Σ_(t=1..T) γ_t · μ_c^t · [ y_c^t · log o_c^t + (1 - y_c^t) · log(1 - o_c^t) ]        (7)

where N is the number of data points, T the number of binary classification tasks and γ_t the weight of each binary classification task; o_c^t is the output of the c-th sample for the t-th task and y_c^t the true label; μ_c^t is the weight of each sample in the t-th binary classification task and is optimized as the network trains; w_t denotes the parameters of the t-th classifier, x_c the c-th input vector, and o_c^t = P(y_c^t = 1 | x_c; w_t) the probability model. For each binary classification task the output is either 0 or 1. The final classification result is obtained by integrating and judging the classification results of the individual tasks; in this task, summing the classification results of the 4 binary classification tasks yields the final stage prediction of the disease.
The foreground user interface module displays the corresponding results of each stage to the user; after training is complete, the system realizes intelligent fundus image analysis.
As shown in fig. 1, an intelligent analysis method for retinal images based on information migration and ordered classification includes the following steps:
step 1, performing image preprocessing on an acquired fundus image, wherein the purpose is to improve the image quality and highlight useful characteristics;
step 2, performing fundus image blood vessel segmentation based on information migration; performing blood vessel segmentation on the preprocessed fundus image to obtain a blood vessel image;
step 3, predicting the state change of the blood vessel image based on ordered classification.
The step 1 specifically comprises the following steps:
S101, perform the dual-tree complex wavelet transform and the improved top-hat transform on the acquired original fundus image to obtain the image-enhanced fundus image; step S101 effectively improves retinal image quality, increases blood vessel contrast, and highlights characteristic elements (blood vessel structure, optic disc and macula);
the dual-tree complex wavelet transform (DT-CWT) overcomes the defects of the conventional discrete wavelet transform: when the corresponding wavelet bases approximately satisfy the Hilbert transform relationship, it reduces the shift sensitivity of the conventional real wavelet transform and improves direction selectivity, retaining detail information while effectively improving image quality; after the dual-tree complex wavelet transform is applied to the original fundus image, the improved top-hat transform is applied to the complex-wavelet-transformed fundus image;
the improved top-hat transform: an opening operation is applied to the fundus image after the dual-tree complex wavelet transform to obtain an opened fundus image; when the opened fundus image is subtracted from the transformed image, pixels whose gray value was changed by the opening keep their original value, while unchanged pixels take the subtraction result;
The morphological top-hat transform is realized by combinations of opening and closing operations, which are in turn derived from dilation and erosion. The traditional top-hat transform takes the original image minus its opening as the final result, but that result is dark overall and some darker edges cannot be displayed. In the improved top-hat transform, pixels whose gray value changed keep their original value while unchanged pixels take the subtraction result, which markedly increases the gray-value differences of the image and effectively protects edges with small amplitude variation;
After the dual-tree complex wavelet transform and the improved top-hat transform, a fundus image with enhanced local detail and improved image quality is obtained;
S102, perform field-of-view extraction based on spatial luminance information: convert the image-enhanced fundus image (a color image whose local details have been enhanced and whose quality has been improved by the complex wavelet and top-hat transforms) from RGB to YIQ format, set a segmentation threshold, extract the surrounding black field of view, obtain the useful-information area through an erosion operation, separate the fundus area from the background area to obtain a mask image, and multiply the mask image by the image-enhanced fundus image to obtain the fundus area image.
Fundus image analysis after image enhancement requires acquiring an ROI (region of interest), i.e. the fundus area image, so that the influence of pixels outside the ROI can be effectively avoided in subsequent processing and the computational complexity is reduced; the ROI and the useless area can be delimited by a mask image;
The mask image is commonly called the field of view; a mask image of the same size is generated from the original fundus image to separate the fundus area from the background area. To extract the mask image (the fundus image field of view) accurately, the invention provides a field-of-view extraction method based on spatial luminance information: luminance and chrominance information are separated on the basis of the YIQ-format image, and the surrounding black area is then extracted by choosing a segmentation threshold. Unlike the RGB three-color channels, in the YIQ format Y carries the luminance information of the image, I the color change from orange to cyan, and Q the color change from purple to yellow-green. Since the format carries both luminance and color information, the luminance component of the image is separated and extracted;
the mask image extraction specifically comprises the following steps:
The first step: convert the image-enhanced fundus image from RGB format to YIQ format:

    Y = 0.299·R + 0.587·G + 0.114·B
    I = 0.596·R - 0.274·G - 0.322·B        (1)
    Q = 0.211·R - 0.523·G + 0.312·B

The above formula (1) yields the 3 components of the YIQ-format image;
The second step: set a segmentation threshold, extract the surrounding black field of view, and obtain the region of interest through an erosion operation;
The choice of the segmentation threshold is the basis of segmentation extraction: a single threshold divides the pixel gray values into two classes, those above the threshold and those below it. The Y component carries the luminance information of the image; the Y-component histogram of a YIQ-format fundus image is bimodal, containing two high-pixel parts with pixel values of 0 at both ends, and separation is performed according to these histogram characteristics;
through a series of experiments, in this embodiment, the segmentation threshold is set to 50 to perform extraction and separation of the black region, and a mask image is obtained:
$$M(x', y) = \begin{cases} 1, & Y(x', y) > 50 \\ 0, & Y(x', y) \le 50 \end{cases} \quad (2)$$
wherein, "1" represents a background side block diagram, and "0" represents an eyeball blood vessel; y is the brightness information of the image, which is equal to the gray value of the brightness component of the image (Y component of the fundus image in YIQ format), M (x ', Y) is the extracted background frame, x ', Y represents the pixel coordinates, and if the Y component of the x ' th row and Y column pixels is greater than 50, the point is considered as the background area;
S103, a multi-scale linear filter based on the Hessian matrix sets its parameters according to the distinct gray values and eigenvalues of the blood vessels in the fundus region image; filtering removes noise and further highlights the vessel characteristics, yielding the filtered blood vessel feature image;
The Hessian matrix enables enhancement of a specific structure (point structure or line structure) in the image, thereby extracting the target features and removing other useless noise information;
in a two-dimensional image, the Hessian matrix is a 2×2 symmetric matrix with two eigenvalues and corresponding eigenvectors. The two eigenvalues describe the anisotropy of the image variation along the directions of the two eigenvectors; if an ellipse is constructed from the eigenvectors and eigenvalues, the ellipse characterizes this anisotropy. Point structures in an image are isotropic, whereas linear structures are anisotropic. A filter constructed from the Hessian eigenvalues therefore enhances the linear structures in the fundus region image, i.e., the vascular structures, while filtering out the punctiform structures, i.e., noise points.
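The following sketch shows a Hessian-eigenvalue line filter of this kind, a Frangi-style vesselness response accumulated over several scales; the scale set, the response function, and the constants beta and c are illustrative assumptions rather than the parameters of this embodiment:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness(image, scales=(1.0, 2.0, 3.0), beta=0.5, c=15.0):
    """Multi-scale Hessian line filter: enhances anisotropic (vessel-like)
    structures, suppresses isotropic (point-like) noise.
    Assumes bright vessels on a dark background; invert the image otherwise."""
    img = image.astype(np.float64)
    response = np.zeros_like(img)
    for s in scales:
        # scale-normalized second-order Gaussian derivatives = Hessian entries
        hxx = gaussian_filter(img, s, order=(0, 2)) * s ** 2
        hyy = gaussian_filter(img, s, order=(2, 0)) * s ** 2
        hxy = gaussian_filter(img, s, order=(1, 1)) * s ** 2
        # eigenvalues of the 2x2 symmetric Hessian
        root = np.sqrt((hxx - hyy) ** 2 + 4 * hxy ** 2)
        l1 = (hxx + hyy + root) / 2
        l2 = (hxx + hyy - root) / 2
        swap = np.abs(l1) > np.abs(l2)               # order so that |l1| <= |l2|
        l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
        rb2 = (l1 / (l2 + 1e-10)) ** 2               # line-vs-blob measure
        s2 = l1 ** 2 + l2 ** 2                       # overall structure strength
        v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
        v[l2 > 0] = 0                                # keep only ridge-like responses
        response = np.maximum(response, v)           # strongest response across scales
    return response
```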
Step 2, performing blood vessel segmentation on the filtered blood vessel feature image of the preprocessed fundus image based on information migration. The fundus image blood vessel segmentation network module comprises two segmentation networks: a guidance network and a bionic network. Information migration refers to migrating the knowledge extracted by the pre-trained guidance network to the bionic network. Step 2 comprises the following steps:
Step 201, the filtered blood vessel feature image and the segmentation labels are used as the common input of the guidance network and the bionic network, and the guidance network is trained on the blood vessel segmentation task. During the iterative training of the guidance network, a segmentation accuracy test is performed on the verification set after every 100 iterations, and every weight whose segmentation accuracy exceeds the set segmentation threshold is saved; after the iterative training is completed, the weight with the highest segmentation accuracy is selected as the optimal weight of the guidance network (optimal weights are saved during iterative training: the network is tested on the verification set every 100 iterations, and the weight achieving the highest segmentation accuracy is called the optimal weight);
the whole data set is divided into a training set, a verification set and a test set, and the same data set is used by the guidance network and the bionic network;
Step 202, during the iterative training of the bionic network, the guidance network loads its optimal weight saved in step 201 and generates the guidance codec matrix and the guidance residual similarity for the filtered blood vessel feature image; the bionic network generates the corresponding bionic codec matrix and bionic residual similarity for the same image and fits the segmentation labels, the guidance codec matrix, and the guidance residual similarity, with an L2 loss function as the constraint condition during fitting;
Step 203, the bionic codec matrix, the bionic residual similarity parameters, and the segmentation network parameters of the bionic network are updated by back-propagation in the direction that decreases the loss value, and the procedure jumps to step 202 for further iterative training; a segmentation accuracy test is performed on the verification set every 100 iterations, weights whose accuracy exceeds the set segmentation threshold are saved, and after the iterative training is completed the weight with the highest segmentation accuracy is selected as the final optimal weight of the bionic network.
In step 202, the guidance codec matrix and the bionic codec matrix are both of the form $G \in R^{m \times n'}$ (an $m \times n'$ matrix generated from the feature maps on the two sides of the encoder–decoder). The output feature map of the encoder layer is $F^1 \in R^{h \times w \times m}$, where $h$, $w$, $m$ are the height, width, and channel number of the $F^1$ feature map (for the corresponding layer of the guidance or bionic network); the output feature map of the decoder layer is $F^2 \in R^{h \times w \times n'}$, where $h$, $w$, $n'$ are the height, width, and channel number of the $F^2$ feature map. The guidance codec matrix and the bionic codec matrix $G \in R^{m \times n'}$ are calculated as:

$$G_{a,b}(x;W) = \sum_{s=1}^{h} \sum_{t=1}^{w} \frac{F^{1}_{s,t,a}(x;W) \times F^{2}_{s,t,b}(x;W)}{h \times w} \quad (3)$$

where $s = 1, \dots, h$ and $t = 1, \dots, w$ index the spatial positions, and $x$ and $W$ represent the input image and the weights of the guidance/bionic network (guidance network or bionic network), respectively; $G_{a,b}(x;W)$ is the entry in the $a$-th row and $b$-th column of the guidance codec matrix or the bionic codec matrix (the codec matrix is an $m \times n'$ high-dimensional matrix).
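Under the reconstruction of formula (3), the codec matrix is a normalized inner product of the flattened encoder and decoder feature maps. A minimal PyTorch sketch, assuming the two feature maps share the same spatial size h × w:

```python
import torch

def codec_matrix(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    """Codec matrix G of shape (batch, m, n') from an encoder feature map
    f1 of shape (batch, m, h, w) and a decoder feature map f2 of shape
    (batch, n', h, w); G[a, b] = sum over s, t of F1[s,t,a] * F2[s,t,b] / (h*w)."""
    b, m, h, w = f1.shape
    n = f2.shape[1]
    f1 = f1.reshape(b, m, h * w)
    f2 = f2.reshape(b, n, h * w)
    return torch.bmm(f1, f2.transpose(1, 2)) / (h * w)
```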
In step 202, the residual similarity is extracted by a multi-scale residual similarity (MRS) collection module, which captures the feature information of local regions; the MRS module collects context information with a convolution-like operation, as follows:
for the $i$-th feature vector $Y^{(i)}$, a similarity value $P_j'$ is computed by element-wise multiplication between each central pixel $P_{center}$ and the pixels $P_j$ in its $d \times d$ neighborhood:

$$P_j' = P_j \times P_{center} \quad (4)$$

where $j$ indexes the coordinates of the $d \times d$ region. A local representation is obtained for every pixel of the filtered blood vessel feature image, and the local representations are then concatenated along the channel dimension to give the residual similarity $Y_r^{(i)}$ of the $i$-th input feature vector, where $d$ is the size of the user-defined region and $H$ and $W'$ are the height and width of the feature vector. Because the importance of the pixels around the central pixel decays with distance, the multi-scale residual similarity collection module selects several values of $d$ ($d = 3, 5, 7$, the different values providing the multiple scales) and sums the corresponding residual similarities $Y_r^{(i),d}$ to obtain the final residual similarity $\tilde{Y}_r^{(i)}$ of the $i$-th feature vector (the former being the residual similarity of the input feature vector and $\tilde{Y}_r^{(i)}$ the residual similarity of the output feature vector).
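A sketch of the MRS collection in PyTorch; gathering the d × d neighborhoods with torch.nn.functional.unfold follows the description, while averaging each scale over its neighborhood before the multi-scale sum is an assumed reduction (the text only states that the per-scale similarities are summed):

```python
import torch
import torch.nn.functional as F

def residual_similarity(feat: torch.Tensor, d: int) -> torch.Tensor:
    """P_j' = P_j * P_center for every pixel's d x d neighborhood.
    feat: (batch, channels, H, W) -> (batch, channels * d * d, H, W),
    i.e. the local representations concatenated along the channel dimension."""
    b, c, h, w = feat.shape
    neigh = F.unfold(feat, kernel_size=d, padding=d // 2)   # (b, c*d*d, H*W)
    neigh = neigh.view(b, c, d * d, h, w)
    return (neigh * feat.unsqueeze(2)).reshape(b, c * d * d, h, w)

def multi_scale_residual_similarity(feat: torch.Tensor, scales=(3, 5, 7)) -> torch.Tensor:
    """Sum the per-scale similarities; each scale is first averaged over its
    neighborhood so all scales share the common shape (b, c, H, W)."""
    b, c, h, w = feat.shape
    total = torch.zeros_like(feat)
    for d in scales:
        sim = residual_similarity(feat, d).view(b, c, d * d, h, w)
        total = total + sim.mean(dim=2)
    return total
```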
The guidance codec matrices generated by the guidance network are $G_i^T$ and the guidance residual similarities are $\tilde{Y}_r^{T(i)}$; the bionic codec matrices generated by the bionic network are $G_i^S$ and the bionic residual similarities are $\tilde{Y}_r^{S(i)}$, for $i = 1, \dots, n$. Information migration drives the bionic codec matrix and the bionic residual similarity toward the guidance codec matrix and the guidance residual similarity, respectively. The loss function of the information migration task is:

$$L(W_t, W_s) = \frac{1}{N}\sum_{x}\sum_{i=1}^{n}\left( \lambda_i \left\| G_i^{T}(x;W_t) - G_i^{S}(x;W_s) \right\|_2^2 + \beta_i \left\| \tilde{Y}_r^{T(i)} - \tilde{Y}_r^{S(i)} \right\|_2^2 \right)$$

where the outer sum runs over the $N$ data points; $W_t$ is the guidance network weight, $W_s$ the bionic network weight, $G_i^T$ the codec matrix of the $i$-th feature vector of the guidance network, $G_i^S$ the codec matrix of the $i$-th feature vector of the bionic network, $n$ the number of feature vectors, and $\lambda_i$ and $\beta_i$ the weight factors of the corresponding loss terms.
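A minimal sketch of this migration loss in PyTorch, assuming the guidance-network outputs are detached from the graph (the guidance network is frozen at its optimal weights in step 202) and that a mean squared error realizes the L2 constraint:

```python
import torch

def migration_loss(guide_G, bionic_G, guide_R, bionic_R, lambdas, betas):
    """Information-migration loss over n feature vectors.
    guide_G / bionic_G: lists of codec matrices G_i^T and G_i^S;
    guide_R / bionic_R: lists of residual similarities;
    lambdas / betas: weight factors of the corresponding loss terms."""
    loss = torch.zeros((), device=bionic_G[0].device)
    for gt, gs, lam in zip(guide_G, bionic_G, lambdas):
        loss = loss + lam * torch.mean((gt.detach() - gs) ** 2)
    for rt, rs, beta in zip(guide_R, bionic_R, betas):
        loss = loss + beta * torch.mean((rt.detach() - rs) ** 2)
    return loss
```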
S3, predicting the state change of the segmented blood vessel image, decomposing the ordered multi-class problem into a sequence of binary classification problems;
specifically, for an M-class problem (M is the number of classes of the state change to be predicted; in this embodiment M = 5), ordered classification splits the problem into M−1 binary classification problems, namely (0 vs 1+2+3+…+M−1), (0+1 vs 2+3+…+M−1), …, (0+1+2+…+M−2 vs M−1). According to the ordinal relationship between the variables, ordered classification is introduced into the network structure of the fundus image intelligent analysis prediction (the network here being the classification network of the intelligent analysis). The loss function of the ordered classification is:
$$L = -\frac{1}{N}\sum_{c=1}^{N}\sum_{t=1}^{T} \gamma_t\, w_c^t \left[ y_c^t \log \hat{y}_c^t + \left(1 - y_c^t\right)\log\left(1 - \hat{y}_c^t\right) \right]$$

where $N$ is the number of data points, $T$ the number of binary classification tasks, $\gamma_t$ the weight of the $t$-th binary classification task, $\hat{y}_c^t$ the output of the $c$-th sample relative to the $t$-th binary classification task, $y_c^t$ the true label of the $t$-th binary classification task for the $c$-th sample, and $w_c^t$ the weight of the $c$-th sample in the $t$-th binary classification task, optimized as the network is trained. $W_t$ denotes the parameters of the $t$-th binary classification task classifier (specifically including the number of network layers and the filter sizes), $x_c$ is the $c$-th input vector, and $\hat{y}_c^t = p(y = 1 \mid x_c; W_t)$ for a probability model $p$. The final classification result is obtained by integrating the results of the individual binary classification tasks.
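A sketch of the ordinal decomposition and its loss for M = 5; the sigmoid outputs, the binary cross-entropy form, and the threshold-counting integration rule are common ordinal-classification choices, stated here as assumptions rather than the embodiment's exact design:

```python
import torch
import torch.nn.functional as F

def ordinal_targets(labels: torch.Tensor, num_classes: int = 5) -> torch.Tensor:
    """Split an M-class ordinal label into M-1 binary targets:
    target[:, t] = 1 iff label > t (the '0..t vs t+1..M-1' split)."""
    thresholds = torch.arange(num_classes - 1, device=labels.device)
    return (labels.unsqueeze(1) > thresholds).float()       # (batch, M-1)

def ordinal_loss(logits, labels, gamma, sample_w, num_classes=5):
    """Weighted binary cross-entropy over the M-1 binary tasks.
    gamma: (M-1,) task weights; sample_w: (batch, M-1) per-sample weights."""
    targets = ordinal_targets(labels, num_classes)
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (gamma * sample_w * bce).mean()

def ordinal_predict(logits):
    """Integrate the binary results: predicted grade = number of binary
    tasks whose probability exceeds 0.5."""
    return (torch.sigmoid(logits) > 0.5).sum(dim=1)
```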
Thus far, the technical solution of the present invention has been described with reference to the preferred embodiments shown in the drawings, but those skilled in the art will readily appreciate that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions of the related technical features may be made without departing from the principles of the present invention, and such modifications and substitutions fall within the scope of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or groups of devices in the examples disclosed herein may be arranged in a device as described in this embodiment, or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into a plurality of sub-modules.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or groups of embodiments may be combined into one module or unit or group, and furthermore they may be divided into a plurality of sub-modules or sub-units or groups. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Furthermore, some of the embodiments are described herein as methods or combinations of method elements that may be implemented by a processor of a computer system or by other means of performing the functions. Thus, a processor with the necessary instructions for implementing the described method or method element forms a means for implementing the method or method element. Furthermore, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is for carrying out the functions performed by the elements for carrying out the objects of the invention.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions of the methods and apparatus of the present invention, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the method of the invention in accordance with instructions in said program code stored in the memory.
By way of example, and not limitation, computer readable media comprise computer storage media and communication media. Computer storage media store information such as computer readable instructions, data structures, program modules, or other data. Communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
As used herein, unless otherwise specified the use of the ordinal terms "first," "second," "third," etc., to describe a general object merely denote different instances of like objects, and are not intended to imply that the objects so described must have a given order, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments are contemplated within the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is defined by the appended claims.

Claims (2)

1. The retina image intelligent analysis system based on information migration and ordered classification is characterized by comprising a fundus image preprocessing module, a fundus image blood vessel segmentation network module and a fundus image intelligent analysis prediction module;
the fundus image preprocessing module performs image preprocessing on the acquired fundus image;
the fundus image blood vessel segmentation network module performs blood vessel segmentation on the preprocessed fundus image based on information migration to obtain a blood vessel image, wherein a guiding network and a bionic network are defined in the fundus image blood vessel segmentation network module;
the fundus image intelligent analysis prediction module predicts state change of the blood vessel image based on ordered classification;
the fundus image preprocessing module comprises an image enhancer module, a mask generation sub-module and a multi-scale linear filter sub-module;
the image enhancer module performs dual-tree complex wavelet and improved top-hat conversion on the acquired original fundus image to obtain a fundus image after image enhancement;
the improved top-hat transformation specifically comprises the following steps: performing open operation on the fundus image subjected to the double-tree complex wavelet transformation to obtain an open operation fundus image, wherein when the open operation fundus image is subtracted from the fundus image subjected to the double-tree complex wavelet transformation, data with a transformed gray value are kept unchanged, and unchanged data are subtraction results;
the mask generation submodule performs field-of-view extraction based on spatial brightness information: it converts the image-enhanced fundus image from RGB format to YIQ format, sets a segmentation threshold, extracts the surrounding black field of view, obtains the useful-information area through an erosion operation, separates the fundus area from the background area to obtain a mask image, and multiplies the image-enhanced fundus image by the mask image to obtain the fundus area image, specifically comprising the following steps:
the first step: converting the fundus image after image enhancement from RGB format to YIQ format:
$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.274 & -0.322 \\ 0.211 & -0.523 & 0.312 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1)$$
equation (1) obtains 3 components of a YIQ-format fundus image;
the second step: setting a segmentation threshold, extracting the surrounding black field of view, and obtaining the region of interest through an erosion operation;
the mask image is obtained as follows:
$$M(x', y) = \begin{cases} 1, & Y(x', y) > 50 \\ 0, & Y(x', y) \le 50 \end{cases}$$
wherein, "1" represents a background side block diagram, and "0" represents an eyeball blood vessel; y is the brightness information of the image and is equal to the gray value of the brightness component of the image, M (x ', Y) is the extracted background frame, and x', Y represent the pixel coordinates;
the multi-scale linear filter submodule is based on a multi-scale linear filter of a Hessian matrix, parameters are set according to different gray values and characteristic values of blood vessels in fundus area images, noise is eliminated after filtering, and a filtered blood vessel characteristic image is obtained;
The fundus image blood vessel segmentation network module training process comprises the following steps:
step 201, using the filtered blood vessel feature image and the segmentation label as common inputs of a guidance network and a bionic network, training the guidance network to perform a blood vessel segmentation task, performing a segmentation accuracy test on a verification set after every A iterations in the iterative training process of the guidance network, storing weights with segmentation accuracy greater than a set segmentation threshold, and selecting the weight with the highest segmentation accuracy as the optimal weight of the guidance network after the iterative training is completed;
step 202, in the process of iterative training of a bionic network, the guiding network loads the optimal weight of the guiding network stored in step 201, generates a guiding encoding and decoding matrix and guiding residual error similarity for the filtered blood vessel characteristic image, generates a corresponding bionic encoding and decoding matrix and a corresponding bionic similarity matrix for the filtered blood vessel characteristic image, and fits and partitions the label, the guiding encoding and decoding matrix and the guiding residual error similarity by the bionic network, and uses a loss function as a constraint condition in the fitting process;
step 203, updating the bionic codec matrix, the bionic residual similarity parameters and the segmentation network parameters of the bionic network in the direction of decreasing loss function value by back-propagation, jumping to step 202 for iterative training, performing a segmentation accuracy test on the verification set every A iterations, saving the weights whose accuracy is greater than the set segmentation threshold, and selecting the weight with the highest segmentation accuracy as the final optimal weight of the bionic network after the iterative training is completed;
in step 202, the guidance codec matrix and the bionic codec matrix are both of the form $G \in R^{m \times n'}$ and are generated from the feature maps of the guidance/bionic encoder and the guidance/bionic decoder; the output feature map of the encoder layer is $F^1 \in R^{h \times w \times m}$, wherein $h$, $w$, $m$ are the height, width and channel number of the $F^1$ feature map; the output feature map of the decoder layer is $F^2 \in R^{h \times w \times n'}$, wherein $h$, $w$, $n'$ are the height, width and channel number of the $F^2$ feature map; the guidance codec matrix and the bionic codec matrix $G \in R^{m \times n'}$ are calculated as:

$$G_{a,b}(x;W) = \sum_{s=1}^{h} \sum_{t=1}^{w} \frac{F^{1}_{s,t,a}(x;W) \times F^{2}_{s,t,b}(x;W)}{h \times w}$$

wherein $s = 1, \dots, h$, $t = 1, \dots, w$; $x$ and $W$ represent the input image and the weights of the guidance/bionic network, respectively; $G_{a,b}(x;W)$ represents the $a$-th row and $b$-th column of the guidance codec matrix or the bionic codec matrix;
step 202 specifically includes the following steps:
for the ith feature vector Y (i) Through each central pixel P center Adjacent to it, the pixel P in d x d region j Element multiplication between them calculates a similarity value P j ' the formula is:
P j ′=P j ×P center
in step 202, j represents coordinates of d×d region, local representation is obtained for each pixel in the vessel feature image after image filtering, and then the local representations are connected along the channel dimension to obtain residual similarity of the ith feature vector
Figure QLYQS_4
Wherein d represents the size of the region, H and W' represent the height and width of the feature vector, respectively, and the corresponding residual similarity +. >
Figure QLYQS_5
Selecting different d values, and adding corresponding residual similarity to the D>
Figure QLYQS_6
Adding and summing to obtain the final residual similarity of the ith feature vector as +.>
Figure QLYQS_7
the guidance codec matrices generated by the guidance network are $G_i^T$, the guidance residual similarities are $\tilde{Y}_r^{T(i)}$, the bionic codec matrices generated by the bionic network are $G_i^S$, and the bionic residual similarities are $\tilde{Y}_r^{S(i)}$, $i = 1, \dots, n$; the loss function of the information migration task is:

$$L(W_t, W_s) = \frac{1}{N}\sum_{x}\sum_{i=1}^{n}\left( \lambda_i \left\| G_i^{T} - G_i^{S} \right\|_2^2 + \beta_i \left\| \tilde{Y}_r^{T(i)} - \tilde{Y}_r^{S(i)} \right\|_2^2 \right)$$

wherein $W_t$ is the guidance network weight, $W_s$ the bionic network weight, $G_i^T$ represents the codec matrix of the $i$-th feature vector of the guidance network, $G_i^S$ represents the codec matrix of the $i$-th feature vector of the bionic network, and $n$ represents the number of feature vectors; $\lambda_i$ and $\beta_i$ represent the weight factors of the corresponding loss terms, and $N$ represents the number of data points;
the fundus image intelligent analysis prediction module based on ordered classification predicts the state change of the segmented blood vessel image, decomposing the ordered multi-class problem into a sequence of binary classification problems;
the loss function of the ordered classification is:

$$L = -\frac{1}{N}\sum_{c=1}^{N}\sum_{t=1}^{T} \gamma_t\, w_c^t \left[ y_c^t \log \hat{y}_c^t + \left(1 - y_c^t\right)\log\left(1 - \hat{y}_c^t\right) \right]$$

wherein $N$ represents the number of data points, $T$ represents the number of binary classification tasks, $\gamma_t$ represents the weight of the $t$-th binary classification task, $\hat{y}_c^t$ represents the output of the $c$-th sample relative to the $t$-th binary classification task, $y_c^t$ represents the true label of the $t$-th binary classification task for the $c$-th sample, $w_c^t$ represents the weight of the $c$-th sample in the $t$-th binary classification task, $W_t$ represents the parameters of the $t$-th binary classification task classifier, $x_c$ represents the $c$-th input vector, and $\hat{y}_c^t = p(y = 1 \mid x_c; W_t)$ for a probability model $p$.
2. An intelligent retinal image analysis method based on information migration and ordered classification is characterized by comprising the following steps:
step 1, performing image preprocessing on an acquired fundus image;
step 2, performing fundus image blood vessel segmentation based on information migration; performing blood vessel segmentation on the preprocessed fundus image to obtain a blood vessel image;
step 3, predicting the state change of the blood vessel image based on the ordered classification;
the step 1 specifically comprises the following steps:
s101, performing dual-tree complex wavelet and improved top-hat transformation on an acquired original fundus image to obtain a fundus image after image enhancement;
the improved top-hat transformation specifically comprises the following steps: performing open operation on the fundus image subjected to the double-tree complex wavelet transformation to obtain an open operation fundus image, wherein when the open operation fundus image is subtracted from the transformed fundus image, data with a gray value transformed are kept unchanged, and unchanged data are subtraction results;
S102, performing field-of-view extraction based on spatial brightness information: converting the image-enhanced fundus image from RGB format to YIQ format, setting a segmentation threshold, extracting the surrounding black field of view, obtaining the useful-information area through an erosion operation, separating the fundus area from the background area to obtain a mask image, and multiplying the mask image by the image-enhanced fundus image to obtain the fundus area image;
s103, a multiscale linear filter based on a Hessian matrix is used for setting parameters according to different gray values and characteristic values of blood vessels in the fundus region image, and noise is eliminated after filtering, so that a filtered blood vessel characteristic image is obtained;
the step 2 specifically comprises the following steps:
step 201, using the filtered blood vessel feature image and the segmentation label as common inputs of a guidance network and a bionic network, training the guidance network to perform a blood vessel segmentation task, performing a segmentation accuracy test on a verification set after every A iterations in the iterative training process of the guidance network, storing weights with segmentation accuracy greater than a set segmentation threshold, and selecting the weight with the highest segmentation accuracy as the optimal weight of the guidance network after the iterative training is completed;
step 202, in the process of iterative training of a bionic network, the guiding network loads the optimal weight of the guiding network stored in step 201, generates a guiding encoding and decoding matrix and guiding residual error similarity for the filtered blood vessel characteristic image, generates a corresponding bionic encoding and decoding matrix and a corresponding bionic similarity matrix for the filtered blood vessel characteristic image, and fits and partitions the label, the guiding encoding and decoding matrix and the guiding residual error similarity by the bionic network, and uses a loss function as a constraint condition in the fitting process;
Step 203, updating the bionic codec matrix, the bionic residual similarity parameters and the segmentation network parameters of the bionic network in the direction of decreasing loss function value by back-propagation, jumping to step 202 for iterative training, performing a segmentation accuracy test on the verification set every A iterations, saving the weights whose accuracy is greater than the set segmentation threshold, and selecting the weight with the highest segmentation accuracy as the final optimal weight of the bionic network after the iterative training is completed;
in step 202, the guidance codec matrix and the bionic codec matrix are both of the form $G \in R^{m \times n'}$ and are generated from the feature maps of the guidance/bionic encoder and the guidance/bionic decoder; the output feature map of the encoder layer is $F^1 \in R^{h \times w \times m}$, wherein $h$, $w$, $m$ are the height, width and channel number of the $F^1$ feature map; the output feature map of the decoder layer is $F^2 \in R^{h \times w \times n'}$, wherein $h$, $w$, $n'$ are the height, width and channel number of the $F^2$ feature map; the guidance codec matrix and the bionic codec matrix $G \in R^{m \times n'}$ are calculated as:

$$G_{a,b}(x;W) = \sum_{s=1}^{h} \sum_{t=1}^{w} \frac{F^{1}_{s,t,a}(x;W) \times F^{2}_{s,t,b}(x;W)}{h \times w}$$

wherein $s = 1, \dots, h$, $t = 1, \dots, w$; $x$ and $W$ represent the input image and the weights of the guidance/bionic network, respectively; $G_{a,b}(x;W)$ represents the $a$-th row and $b$-th column of the guidance codec matrix or the bionic codec matrix;
step 202 specifically includes the following steps:
for the $i$-th feature vector $Y^{(i)}$, a similarity value $P_j'$ is calculated by element multiplication between each central pixel $P_{center}$ and the pixels $P_j$ in its $d \times d$ neighborhood, the formula being:

$$P_j' = P_j \times P_{center}$$

wherein $j$ represents the coordinates of the $d \times d$ region; a local representation is obtained for each pixel in the filtered blood vessel feature image, and the local representations are then connected along the channel dimension to obtain the residual similarity $Y_r^{(i)}$ of the $i$-th input feature vector, wherein $d$ represents the size of the custom region and $H$ and $W'$ represent the height and width of the feature vector, respectively; different $d$ values are selected and the corresponding residual similarities $Y_r^{(i),d}$ are added and summed to obtain the final residual similarity $\tilde{Y}_r^{(i)}$ of the $i$-th feature vector;
the guidance codec matrices generated by the guidance network are $G_i^T$, the guidance residual similarities are $\tilde{Y}_r^{T(i)}$, the bionic codec matrices generated by the bionic network are $G_i^S$, and the bionic residual similarities are $\tilde{Y}_r^{S(i)}$, $i = 1, \dots, n$; the loss function of the information migration task is:

$$L(W_t, W_s) = \frac{1}{N}\sum_{x}\sum_{i=1}^{n}\left( \lambda_i \left\| G_i^{T} - G_i^{S} \right\|_2^2 + \beta_i \left\| \tilde{Y}_r^{T(i)} - \tilde{Y}_r^{S(i)} \right\|_2^2 \right)$$

wherein $W_t$ is the guidance network weight, $W_s$ the bionic network weight, $G_i^T$ represents the codec matrix of the $i$-th feature vector of the guidance network, $G_i^S$ represents the codec matrix of the $i$-th feature vector of the bionic network, and $n$ represents the number of feature vectors; $\lambda_i$ and $\beta_i$ represent the weight factors of the corresponding loss terms, and $N$ represents the number of data points;
S3, predicting the state change of the segmented blood vessel image, decomposing the ordered multi-class problem into a sequence of binary classification problems;
the loss function of the ordered classification is:

$$L = -\frac{1}{N}\sum_{c=1}^{N}\sum_{t=1}^{T} \gamma_t\, w_c^t \left[ y_c^t \log \hat{y}_c^t + \left(1 - y_c^t\right)\log\left(1 - \hat{y}_c^t\right) \right]$$

wherein $N$ represents the number of data points, $T$ represents the number of binary classification tasks, $\gamma_t$ represents the weight of the $t$-th binary classification task, $\hat{y}_c^t$ represents the output of the $c$-th sample relative to the $t$-th binary classification task, $y_c^t$ represents the true label of the $t$-th binary classification task for the $c$-th sample, $w_c^t$ represents the weight of the $c$-th sample in the $t$-th binary classification task, $W_t$ represents the parameters of the $t$-th binary classification task classifier, $x_c$ represents the $c$-th input vector, and $\hat{y}_c^t = p(y = 1 \mid x_c; W_t)$ for a probability model $p$; the final classification result is determined by integrating the classification results of the individual binary classification tasks.