CN117557840B - Fundus lesion grading method based on small sample learning - Google Patents
- Publication number: CN117557840B
- Application number: CN202311491052.9A
- Authority: CN (China)
- Legal status: Active
Classifications
- G06V10/764 — Image or video recognition or understanding using pattern recognition or machine learning, using classification
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/048 — Activation functions
- G06N3/08 — Learning methods
- G06T7/0012 — Biomedical image inspection
- G06V10/44 — Local feature extraction by analysis of parts of the pattern
- G06V10/761 — Proximity, similarity or dissimilarity measures
- G06V10/806 — Fusion of extracted features
- G06V10/82 — Recognition using neural networks
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30041 — Eye; Retina; Ophthalmic
Abstract
The invention discloses a fundus lesion grading method based on small sample learning. First, retinal fundus color photographs are collected and preprocessed to obtain an initial fundus lesion data set. The image data are then passed through a contrast network and a meta-network, both pre-trained and meta-trained, which respectively learn the intra-class and inter-class characteristics of each fundus lesion level. Finally, unlabeled images are scored for similarity against the prototype of each fundus lesion level to produce a grading prediction. Through the dual-network structure of a contrast network and a meta-trained meta-network, the method learns fundus lesions from only a small number of fundus color photographs and grades them while effectively reducing the influence of noise in both the feature space and the label space, thereby improving the accuracy of fundus lesion prediction.
Description
Technical Field
The invention relates to a fundus lesion grading method based on image data, and in particular to a fundus lesion grading method based on small sample learning, belonging to the technical fields of medical image processing and computer vision.
Background
In recent years, deep learning techniques have been widely applied in computer vision and other fields with remarkable results. This high accuracy, however, depends heavily on large-scale labeled data, which is not always available in practice, for example in the medical field. Taking fundus disease prediction as an example, fundus color photographs can be used to diagnose and predict diseases such as diabetic retinopathy, glaucoma, conversion of the contralateral eye to neovascular AMD within one year, cardiovascular diseases (ischemic stroke, myocardial infarction, heart failure), and neurodegenerative diseases (Parkinson's disease). Early prediction allows timely intervention; for patients with chronic diseases such as diabetes, early disease prediction can help prevent serious complications. However, retinal fundus color photographs are difficult to obtain for both technical and physiological reasons: pupil size, patient cooperation, corneal opacity, refractive error, and similar issues can prevent image acquisition, and the quality of the equipment and lenses used is critical to obtaining high-quality fundus photographs. Low-quality equipment or lenses may distort the images, while high-quality fundus cameras are generally expensive and medical resources are limited. Labeled fundus lesion data usable as training samples are therefore relatively scarce, making traditional deep learning methods difficult to apply; judging and screening diseases from the collected fundus color photographs consequently requires a great deal of manpower and time.
Small sample learning, which imitates the human ability to recognize new classes from only a few examples, has attracted increasing interest because collecting large amounts of data is costly and laborious. Its purpose is to learn quickly from only a small number of labeled data samples while generalizing well to new tasks. However, most existing small sample learning methods assume that label information is completely clean and complete, and do not consider the robustness of the model to noisy labels. In fact, noisy labels are ubiquitous in medical images due to limited knowledge and unintentional corruption, so existing small sample learning methods suffer from low prediction accuracy when used for fundus lesion grading. Taking the grading of the sugar network (short for diabetic retinopathy) as an example: on the one hand, grades that share the same judging criterion are easily confused. Mild NPDR and moderate NPDR are both judged by microaneurysms and differ only in severity, so samples transitioning from mild to moderate NPDR are easily misidentified from the fundus color photograph. On the other hand, a patient's fundus color photograph may contain disease features unfamiliar or rare to the expert, and photographing angle, data transmission, image damage, and similar factors can all bias the expert's grading annotation, producing noisy labeled samples. Such noise often causes large model deviation and seriously affects the prediction result.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a fundus lesion grading method based on small sample learning that grades fundus lesions while effectively reducing the influence of noise in both the feature space and the label space, thereby improving the accuracy of fundus lesion prediction.
To achieve this purpose, the fundus lesion grading method based on small sample learning specifically comprises the following steps:
Step1, collecting retinal fundus color photographs from data sources, exporting the pictures after data acquisition is completed, and having a professional doctor annotate all pictures and grade them according to the international clinical grading standard for fundus lesions;
Step2, after processing the fundus color photographs to improve image quality, reduce noise, and normalize the images, selecting the data sets: K samples are selected for each level as the support set S = {(x i, y i)}, together with a query set Q = {(x i, y i)}, i = 1, …, M, where y i ∈ C novel, K is the number of samples extracted for each fundus lesion level, and M is the number of samples in the query set Q;
At the same time, an auxiliary data set D b = {(x i, y i)} with abundant samples and accurate labels is defined, where y i ∈ C base and C base ∩ C novel = ∅ is required; the auxiliary data set is likewise divided into a support set S b and a query set Q b for meta-training;
Step3, constructing a contrast network for generating intra-class weights, which learns the intra-class characteristics of each fundus lesion level after pre-training and meta-training;
Step4, constructing a meta-network for generating inter-class weights, which learns the inter-class characteristics of each fundus lesion level after pre-training and meta-training;
Step5, correcting the prototype of each fundus lesion level using the sample weights produced by the contrast network and the meta-network;
Step6, scoring the similarity between unlabeled images and the prototype of each fundus lesion level to obtain the grading prediction of fundus lesions.
Further, step3 specifically comprises the following steps:
Step3-1, pre-training the contrast network g ξ, which maps sample vectors into a feature space through its feature extraction network;
Step3-2, for the K samples under each category of the broader-source medical data set D b in the meta-training stage, and the K samples of each fundus lesion grade in the query stage, the pairwise similarity between samples is calculated using cosine similarity, with the following formula:
cor(xa,xb)=cos(gξ(xa),gξ(xb))
Wherein: g ξ denotes the pre-trained contrast-network feature extractor, and x a, x b denote two support samples;
A K×K correlation matrix Z n ∈ R K×K is thereby obtained for each class n. Each correlation matrix Z n contains the correlation information between one support sample and the remaining K−1 samples of the same level, and this information is simultaneously distributed over the other K−1 correlation features;
Step3-3, the correlation matrix Z n is input directly into a Transformer layer (without positional encoding):
o n = T φT(Z n)
Wherein: φ T denotes the Transformer layer parameters, Z n denotes the correlation matrix, and o n denotes the output result;
Then, using a softmax function ρ over the K samples, the intra-class weight vector V n corresponding to each sample is calculated:
V n = ρ(o n)
A vector characterizing the intra-class weights is thus generated for each sample.
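As a rough illustration of Step3-2 and Step3-3, the sketch below is a heavily simplified assumption: the Transformer layer is replaced by a plain row-mean over the correlation matrix, and the sample embeddings are given directly as small vectors rather than produced by g ξ.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def intra_class_weights(features):
    """features: K embedded support samples of one lesion level.
    Builds the K x K correlation matrix Z_n, scores each sample by its
    mean correlation with its classmates (a stand-in for the Transformer
    layer), and normalizes the scores with softmax."""
    K = len(features)
    Z = [[cosine(features[a], features[b]) for b in range(K)] for a in range(K)]
    scores = [sum(row) / K for row in Z]   # aggregate output o_n
    return softmax(scores)                 # intra-class weight vector V_n

# Three consistent samples plus one outlier: the outlier gets less weight.
feats = [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0], [-0.2, 1.0]]
w = intra_class_weights(feats)
```

Under this reading, a mislabeled (outlier) support sample receives a low intra-class weight, which is the down-weighting effect the patent attributes to the contrast branch.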
Further, step3-1 is specifically as follows:
Step3-1-1, feature extractor pre-training: given the feature extractor f θ(·), the prototype of each level k is calculated as
c k = (1/|S k|) Σ (x t, y t) ∈ S k f θ(x t)
Wherein: |S k| is the number of support samples with fundus lesion level k; x t denotes a support sample; y t denotes the fundus lesion level label corresponding to sample x t;
Given a new sample x q from the query set, the classifier outputs a normalized classification score for each class k:
p θ,w(y = k | x q) = exp(sim w(f θ(x q), c k)) / Σ k′ exp(sim w(f θ(x q), c k′))
Wherein: sim w(·) is a similarity function; f θ(·) is the feature extractor; x q is the query sample; c k is the class-k prototype;
sim w(·) is calculated as follows:
sim w(A, B) = λ cos(F w(A), F w(B))
Wherein: F w(·) is a w-parameterized single-layer neural network; λ is an inverse temperature parameter;
θ and w are updated by the following loss function:
L(θ, w) = E[−log p θ,w(y = y q | x q)]
Wherein: x q, y q are drawn from the query samples; p θ,w is the normalized score for the corresponding class; E denotes the mathematical expectation;
step3-1-2, contrast web pretraining:
The contrast network g ξ is trained with a conditional loss and a contrastive loss. Given an input fundus color photograph x, two images x′ and x″ are generated by different data-enhancement methods. The two enhanced images are put in turn through the contrastive feature-extraction network g ξ and a projection multi-layer perceptron head σ, where σ further projects and transforms the raw features extracted by g ξ; finally, the output of σ is put through a prediction multi-layer perceptron head δ, and the negative cosine similarity of the two embedded vectors is obtained:
D(p′, z″) = −(p′ / ∥p′∥2) · (stop-gradient(z″) / ∥z″∥2)
where p′ = δ(σ(g ξ(x′))) and z″ = σ(g ξ(x″)), and the symmetric contrastive loss is L con = ½ D(p′, z″) + ½ D(p″, z′):
Wherein: ∥·∥2 denotes the l2 norm; stop-gradient denotes the stop-gradient operation;
Meanwhile, a conditional loss is used to guide the learning of the contrast network, with the feature extractor f θ(·) steering the contrast network's learning;
Combining the contrastive loss with the conditional loss gives the objective function of the contrast-network optimization:
L = L con + γ L cond
Wherein: γ is a positive constant balancing the importance of the contrastive loss and the conditional loss.
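Step3-1-2 describes a stop-gradient contrastive scheme. A minimal numeric sketch, assuming the encoder, projection head, and prediction head have already mapped the two augmented views to vectors p and z; on plain numbers stop-gradient is a no-op and is only marked in a comment.

```python
import math

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def neg_cosine(p, z):
    """D(p, z) = -cos(p, stopgrad(z)). In an autodiff framework z would
    be detached from the gradient; here that step has no numeric effect."""
    p, z = l2_normalize(p), l2_normalize(z)
    return -sum(a * b for a, b in zip(p, z))

def contrastive_loss(p1, z2, p2, z1):
    """Symmetric loss over the two augmented views:
    L_con = 0.5 * D(p', z'') + 0.5 * D(p'', z')."""
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)
```

When the two views embed identically the loss reaches its minimum of −1; orthogonal embeddings give 0, so minimizing the loss pulls the two views of one fundus photograph together.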
Further, step4 specifically comprises the following steps:
Step4-1, initializing a meta-learning network (its feature extractor is denoted g ψ below) for extracting support-set features;
Step4-2, firstly, for all support samples, K-Means clustering is performed with the class prototypes of the support samples as the initial cluster centers, until convergence;
Step4-3, calculating cosine similarity between the final clustering center and all samples to obtain a similarity matrix;
Step4-4, using a softmax function, the inter-class weight of a support sample x t for a given fundus lesion level n is obtained as:
w t n = exp(s t n / τ) / Σ n′ exp(s t n′ / τ)
Wherein: s t n denotes the similarity score of sample x t for fundus lesion level n; τ is a hyperparameter that prevents gradient vanishing; K denotes the number of samples corresponding to each category.
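Under this reading, Step4-4 reduces to a temperature-scaled softmax over one sample's similarity scores to the cluster centers. A small sketch; the score values are made up for illustration.

```python
import math

def inter_class_weight(sims, tau=0.1):
    """sims: similarity scores s_t^n of one support sample to each of the
    lesion-level cluster centers; tau is the temperature hyperparameter
    that keeps the softmax from saturating. Returns one weight per level."""
    m = max(sims)
    exps = [math.exp((s - m) / tau) for s in sims]
    z = sum(exps)
    return [e / z for e in exps]

# A sample clearly closest to the first cluster center.
sims = [0.9, 0.2, 0.1, 0.0, -0.3]
w = inter_class_weight(sims)
```

Lowering tau sharpens the distribution toward the best-matching level, while a large tau spreads weight more evenly; this is the usual trade-off behind temperature scaling.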
Further, step5 specifically comprises the following steps:
Step5-1, the prototype correction in the contrast network is:
P n = Σ k=1..K (α·w k inter + β·w k intra)·g ξ(x k n)
Wherein: α + β = 1; w k inter is the inter-class weight; w k intra is the intra-class weight; g ξ denotes the contrast-network feature extractor; x k n represents the kth sample of class n;
Step5-2, the prototype correction in the meta-network is:
P′ n = Σ k=1..K (α·w k inter + β·w k intra)·g ψ(x k n)
Wherein: α + β = 1; w k inter is the inter-class weight; w k intra is the intra-class weight; g ψ denotes the meta-network feature extractor; x k n represents the kth sample of class n.
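Steps 5-1 and 5-2 are each a weighted sum of support embeddings. A compact sketch, assuming the embeddings g(x k n) are given directly and using α = β = 0.5 as an illustrative split.

```python
def corrected_prototype(features, w_inter, w_intra, alpha=0.5, beta=0.5):
    """P_n = sum_k (alpha * w_inter[k] + beta * w_intra[k]) * g(x_k^n),
    with alpha + beta = 1. features: embedded support samples of level n."""
    dim = len(features[0])
    proto = [0.0] * dim
    for f, wi, wj in zip(features, w_inter, w_intra):
        w = alpha * wi + beta * wj
        for d in range(dim):
            proto[d] += w * f[d]
    return proto

# Both weight vectors favor the first sample, so the corrected prototype
# is pulled toward it and away from the (presumably noisy) second sample.
proto = corrected_prototype([[1.0, 0.0], [0.0, 1.0]], [0.8, 0.2], [0.8, 0.2])
```

Because the weights sum to one, the corrected prototype stays inside the convex hull of the support embeddings; noisy samples merely lose influence rather than being hard-rejected.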
Further, step6 specifically comprises the following steps:
Step6-1, the EC similarity score is defined as follows:
s EC(x, c n) = cos(g ξ(x), P n) + cos(g ψ(x), P′ n)
Wherein: x represents a query sample; c n represents the sample center point of fundus lesion level n; g ξ denotes the contrast-network feature extractor; g ψ denotes the meta-network feature extractor; P n represents the prototype corrected by the contrast network; P′ n represents the prototype corrected by the meta-network;
Step6-2, for a query sample x, the EC similarity scores corresponding to the various fundus lesion levels are calculated, and the probability of each fundus lesion level is obtained with a softmax function:
p(y = n | x) = exp(s EC(x, c n) / T) / Σ n′ exp(s EC(x, c n′) / T)
Wherein: s EC denotes the EC similarity score; x represents the query sample; c n represents the sample center point of fundus lesion level n; T is a hyperparameter.
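Step6 can be sketched as follows. The additive fusion of the two branch similarities in `ec_score` is an assumption about the formula lost from the original text, and all embeddings are toy vectors.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def ec_score(q_contrast, q_meta, proto_contrast, proto_meta):
    """Combine the two branches: similarity of the contrast-network
    embedding to the contrast-corrected prototype P_n, plus similarity of
    the meta-network embedding to the meta-corrected prototype P'_n."""
    return cosine(q_contrast, proto_contrast) + cosine(q_meta, proto_meta)

def predict(scores, T=1.0):
    """Softmax over the per-level EC scores with temperature T."""
    m = max(scores)
    exps = [math.exp((s - m) / T) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Query embeddings match the level-0 prototypes in both branches.
scores = [ec_score([1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [1.0, 0.0]),
          ec_score([1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0])]
probs = predict(scores)
```

Summing the two branch similarities means a misprediction by one network can be outvoted by the other, which matches the stability argument made for the dual-network design.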
Compared with the prior art, the fundus lesion grading method based on small sample learning has the following advantages:
1. In fundus disease prediction, data sets are difficult to acquire, so the number of samples available to train an artificial neural network is limited and insufficient for a traditional neural network. By adopting small sample learning, the invention allows the model to learn from only a small number of labeled fundus color photographs per lesion level while still generalizing well to the grading task.
2. Because high-quality retinal fundus color photographs are difficult to obtain and unknown diseases can interfere with the identification of fundus lesions, the expert grading annotations may deviate. By constructing a dual-network structure, the invention eliminates unreasonably annotated noise images as far as possible and attempts to classify the fundus lesions correctly. The contrast network and the meta-network correct the prototype of each fundus lesion level through the intra-class weights and inter-class weights respectively, improving the feature and label representation capability of the model. Metric-level calibration mitigates the effect of mispredictions by either network, improving the stability and robustness of the model.
Drawings
FIG. 1 is a schematic overall flow diagram of the present invention;
FIG. 2 is a block diagram of the data preprocessing of the present invention;
FIG. 3 is an overall framework of a dual network architecture in accordance with the present invention;
FIG. 4 is a schematic diagram of the generation of intra-class weights and inter-class weights in a dual network architecture of the present invention;
FIG. 5 is a schematic diagram of the prototype modification strategy in a dual network architecture of the present invention;
Fig. 6 is a schematic diagram of the contrast-network pre-training phase of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings, taking the grading of sugar-network (diabetic retinopathy) lesions as an example.
As shown in fig. 1, retinal fundus color photographs are first collected and preprocessed, on one hand so that the images fit the network input, and on the other hand to reduce the computation and speed up the subsequent training process, obtaining an initial sugar-network lesion data set. The image data then pass through the pre-trained and meta-trained contrast network and meta-network, which respectively learn the intra-class and inter-class characteristics of each sugar-network lesion level and reduce the influence of noisy fundus color photographs with annotation deviations. Finally, unlabeled images are scored for similarity against the prototype of each sugar-network level to produce the grading prediction. The method comprises the following steps:
Step1, data acquisition and labeling
Retinal fundus color photographs are collected from data sources such as diabetic patients and healthy subjects. Data collection may be performed using sensors, medical devices, database queries, or other means. The pictures are exported after data acquisition is completed and annotated by a professional doctor; according to the new international clinical grading standard for diabetic retinopathy, they are classified into no apparent retinopathy (grade I), mild NPDR (grade II), moderate NPDR (grade III), severe NPDR (grade IV), and PDR (grade V). The specific grading criteria under mydriatic fundus examination are shown in Table 1 below.
Table 1 diabetic retinopathy grading criteria
Step2, data preprocessing
Data preprocessing is performed before the fundus color photographs are analyzed, to improve image quality, reduce noise, and standardize the images for subsequent analysis. As shown in fig. 2, data preprocessing uses the following steps:
Step2-1, image sharpening: applying an image-sharpening filter enhances the edges and details of the image and improves its sharpness. Common sharpening filters include Sobel, Laplacian, and Gaussian-based (unsharp-mask) filters.
Step2-2, contrast enhancement: increasing the contrast of the image may make the lesions more visible. The contrast enhancement method includes histogram equalization and contrast stretching.
Step2-3, denoising: fundus illumination may be subject to various noise, such as light noise, artifacts, and the like. Denoising methods include median filtering, gaussian filtering, and wavelet denoising.
Step2-4, color normalization: the color and brightness of fundus images may be different depending on different photographing conditions, and thus color normalization is required to keep the color and brightness uniform between different images.
Step2-5, removing fundus reflections: the optic disc (a bright region of the fundus) typically introduces large luminance changes in fundus photographs, and its influence sometimes needs to be removed or attenuated in order to better analyze other parts of the retina.
Step2-6, image cropping: the image may be cropped to preserve only the region of interest (ROI) as needed for a particular task, thereby reducing the complexity of the process.
Step2-7, standardization of image scale: the images are adjusted to the same size for training and analysis of the deep learning model.
Step2-8, selecting the data sets: under the five-level sugar-network grading standard, K samples are selected for each level as the support set S = {(x i, y i)}, together with a query set Q = {(x i, y i)}, i = 1, …, M, where y i ∈ C novel, K is the number of samples extracted for each sugar-network lesion level, and M is the number of samples in the query set Q. Each FSL problem in which the support set is used to estimate the categories of the samples in the query set can be seen as one task. At the same time, an auxiliary data set D b = {(x i, y i)} with abundant samples and accurate labels is defined, where y i ∈ C base; this may be a data set derived from other fundus medical tasks, with C base ∩ C novel = ∅ required. The auxiliary data set is likewise divided into a support set S b and a query set Q b, which are used for meta-training. This strategy can be seen as a training exercise of the model on the data of the base classes C base, allowing the model to generalize well to C novel.
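The episode construction of Step2-8 (a K-shot support set plus a query set per task) can be sketched as follows; the dataset layout and function name are illustrative, not taken from the patent.

```python
import random

def sample_episode(dataset, n_way, k_shot, m_query, seed=0):
    """dataset: {level: [images]}; draws K support samples and up to
    m_query query samples per lesion level, mimicking the N-way K-shot
    episodes described above. Support and query never overlap."""
    rng = random.Random(seed)
    support, query = [], []
    levels = sorted(dataset)[:n_way]
    for y in levels:
        pool = dataset[y][:]
        rng.shuffle(pool)
        support += [(x, y) for x in pool[:k_shot]]
        query += [(x, y) for x in pool[k_shot:k_shot + m_query]]
    return support, query

# Five toy lesion levels with ten image IDs each: a 5-way 3-shot episode.
dataset = {g: [f"img{g}_{i}" for i in range(10)] for g in range(1, 6)}
support, query = sample_episode(dataset, n_way=5, k_shot=3, m_query=2)
```

The same sampler can be pointed at the auxiliary base-class data set D b to draw the S b / Q b episodes used for meta-training.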
Step3, constructing the contrast network
To counter the interference of noisy information with the model training effect under small sample learning, a dual-network architecture is constructed, consisting of a contrast network, which generates the intra-class weights, and a meta-network, which generates the inter-class weights. The specific architecture is shown in fig. 3.
Step3-1, pre-training the contrast network g ξ (see Step7 for details), which maps sample vectors into a feature space through its feature-extraction network.
Step3-2, for the K samples under each category of the broader-source medical data set D b in the meta-training stage, and the K samples of each sugar-network grade in the query stage, the pairwise similarity between samples is calculated using cosine similarity, with the following formula:
cor(xa,xb)=cos(gξ(xa),gξ(xb)).
Wherein: g ξ denotes the pre-trained contrast-network feature extractor, and x a, x b denote two support samples;
A K×K correlation matrix Z n ∈ R K×K can thus be obtained for class n; for example, Z 1 reveals the overall correlation among the samples graded as class I, so that potential correlations between related samples of the same level can be modeled. Each correlation matrix Z n contains the correlation information between one support sample and the remaining K−1 samples of the same level, and this information is simultaneously distributed over the other K−1 correlation features. This interconnected nature makes it reasonable to fully consider the context of same-class correlation features when generating the intra-class weights.
Step3-3, the self-attention mechanism of the Transformer model uses the support samples to assign weights according to the similarity between them. Specifically, the correlation matrix obtained in Step3-2 is fed directly into the Transformer layer (without positional encoding), giving o n = Transformer(Z n; φ T).
Wherein: φ T denotes the Transformer layer parameters, Z n represents the correlation matrix, and o n represents the output result;
then, applying the softmax function ρ over the K samples, the intra-class weight vector V n corresponding to each sample can be calculated:
Vn=ρ(on)
After Steps 3-1 through 3-3, a vector representing the intra-class weights can be generated for each sample. The overall process of generating these intra-class weights is shown in fig. 4.
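The Transformer-plus-softmax weighting above can be sketched as single-head self-attention over the K correlation rows, followed by a softmax over the K samples (the projection matrices Wq, Wk, Wv and the row-sum readout are illustrative assumptions; the patent's φ T is not specified in detail):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intra_class_weights(Z, Wq, Wk, Wv):
    """Single-head self-attention over the K correlation rows (no positional
    encoding), then a softmax over the K samples: V_n = rho(o_n)."""
    Q, Km, V = Z @ Wq, Z @ Wk, Z @ Wv
    attn = softmax(Q @ Km.T / np.sqrt(Q.shape[-1]), axis=-1)
    o_n = (attn @ V).sum(axis=-1)      # one scalar score per support sample
    return softmax(o_n)                # intra-class weights, sum to 1

K = 5
rng = np.random.default_rng(1)
Z = rng.normal(size=(K, K))
Wq, Wk, Wv = (rng.normal(size=(K, K)) * 0.1 for _ in range(3))
v = intra_class_weights(Z, Wq, Wk, Wv)
assert v.shape == (K,) and np.isclose(v.sum(), 1.0)
```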
Step4, constructing a meta-network
Step4-1, initializing a meta-learning network for extracting support-set features.
Step4-2, first, for all support samples, perform K-Means clustering with five clusters, using the class prototypes of the support samples as initial cluster centers, until convergence. Class prototypes are representative embeddings generated from the support samples; in the present invention, the center (mean) of each class may be taken as its prototype.
Step4-3, calculate the cosine similarity between the final cluster centers and all samples. Five cluster centers are obtained through Step4-2; computing the cosine similarity between each cluster center and all samples finally yields a similarity matrix S∈R 25×K.
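A minimal numpy sketch of the prototype-initialized K-Means of Step4-2 (Lloyd iterations with the class means as initial centers; the data here are synthetic stand-ins for support embeddings):

```python
import numpy as np

def kmeans_from_prototypes(X, protos, iters=50):
    """K-Means on all support embeddings, initialised at the class prototypes
    (per-class means), iterated until the centers stop moving."""
    centers = protos.copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        new = np.array([X[assign == c].mean(axis=0) if np.any(assign == c)
                        else centers[c] for c in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return centers

rng = np.random.default_rng(2)
X = np.concatenate([rng.normal(loc=3 * c, size=(10, 3)) for c in range(5)])
protos = np.array([X[i * 10:(i + 1) * 10].mean(axis=0) for i in range(5)])
centers = kmeans_from_prototypes(X, protos)
assert centers.shape == (5, 3)
```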
Step4-4, using the softmax function, the inter-class weight of a support sample x t (t = 1, 2, ..., 25K) for a given DR grade is obtained as:
Wherein: the numerator term represents the similarity score of sample x t for grade n; τ is a temperature hyperparameter used to prevent gradient vanishing; K represents the number of samples in each category.
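The temperature softmax of Step4-4 can be sketched as follows (the similarity scores are illustrative inputs; in the method they come from the cluster-center similarities of Step4-3):

```python
import numpy as np

def inter_class_weight(sims, tau=0.1):
    """Softmax over one sample's similarity scores to the five cluster
    centres; tau is the temperature hyper-parameter of Step4-4."""
    z = sims / tau
    e = np.exp(z - z.max())            # subtract max for numerical stability
    return e / e.sum()

w = inter_class_weight(np.array([0.9, 0.2, 0.1, 0.0, -0.3]))
assert np.isclose(w.sum(), 1.0) and w.argmax() == 0
```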
Step5, prototype modification
A frame diagram of the prototype modification strategy is shown in fig. 5.
Step5-1, prototype correction in comparison network is:
Wherein: α+β=1; the intra-class weight is obtained from Step3-3 and the inter-class weight from Step4-4; g ξ denotes the comparison-network feature extractor; x t represents the t-th sample; and the final symbol denotes the k-th sample of class n.
Step5-2, prototype modification in the meta-network is:
Wherein: α+β=1; the intra-class weight is obtained from Step3-3 and the inter-class weight from Step4-4; the meta-network feature extractor maps the samples; x t represents the t-th sample, and the final symbol denotes the k-th sample of class n.
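Since the prototype-correction formula itself is an image in the source, the sketch below shows one plausible reading: a per-sample weight formed as the α/β convex combination of the intra-class and inter-class weights, used to take a weighted mean of the class's support embeddings. The exact combination form is an assumption:

```python
import numpy as np

def corrected_prototype(feats, intra_w, inter_w, alpha=0.5, beta=0.5):
    """Weighted prototype for one class, combining intra-class weights
    (comparison network) and inter-class weights (meta-network); alpha+beta=1.
    Assumed form -- the patent's formula image is not reproduced here."""
    assert np.isclose(alpha + beta, 1.0)
    w = alpha * np.asarray(intra_w) + beta * np.asarray(inter_w)
    w = w / w.sum()                      # renormalise the combined weights
    return (w[:, None] * feats).sum(axis=0)

rng = np.random.default_rng(3)
K, d = 5, 4
p = corrected_prototype(rng.normal(size=(K, d)),
                        np.full(K, 1 / K), np.full(K, 1 / K))
assert p.shape == (d,)
```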
Step6, similarity score
To mitigate the effect of mispredictions by either network, we introduce metric-level calibration on both networks, using a measure called EC (Ensemble with Consistency) similarity to compute the prediction score for each DR grade. As shown in fig. 6, the calculation of the EC similarity score includes the following steps:
step6-1, define the EC similarity score as follows:
wherein: x represents a query sample; c n represents the sample center point of DR grade n; g ξ denotes the comparison-network feature extractor; the meta-network feature extractor likewise maps the sample; p n represents the prototype corrected by the comparison network; p' n represents the prototype corrected by the meta-network.
That is, for each DR grade, the incoming query sample x (i.e., a retinal fundus color photograph) is mapped by the comparison network g ξ and the meta-network, after which the corresponding EC score can be calculated.
Step6-2, for a given query sample x, calculate the EC similarity scores corresponding to grade I/II/III/IV/V DR lesions, and obtain the probability of each DR grade with a softmax function:
Wherein: s EC denotes the EC similarity score; x represents a query sample; c n represents the sample center point of fundus lesion grade n; T is a temperature hyperparameter.
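A hedged sketch of Step6: the exact EC formula is an image in the source, so the `ec_score` below is only one plausible ensemble-with-consistency form (average the two networks' prototype similarities and penalize their disagreement); the softmax over grades follows the text. All inputs are random stand-ins:

```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def ec_score(xg, xm, pn, pn_prime):
    """Assumed EC form: mean of the two networks' similarities, down-weighted
    by their inconsistency. Not the patent's exact (unreproduced) formula."""
    s1, s2 = cos(xg, pn), cos(xm, pn_prime)
    return 0.5 * (s1 + s2) - abs(s1 - s2)

def grade_probs(scores, T=1.0):
    """Softmax with temperature T over the five per-grade EC scores."""
    z = np.asarray(scores) / T
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(4)
scores = [ec_score(rng.normal(size=8), rng.normal(size=8),
                   rng.normal(size=8), rng.normal(size=8)) for _ in range(5)]
p = grade_probs(scores)
assert np.isclose(p.sum(), 1.0) and len(p) == 5
```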
Step7, model Pre-training
Step7-1, feature-extractor pre-training: a prototype for each class is computed by the feature extractor. Given the feature extractor f θ(·), the prototype of each grade k is calculated as the mean embedding of the grade-k support samples:
Wherein: N k is the number of support samples with DR grade k; f θ is implemented with a Conv-4-64 backbone network; x t denotes a support sample; y t represents the label corresponding to sample x t, i.e., the DR grade of x t;
given a new sample x q of the query set, the classifier outputs a normalized classification score for each class k
Wherein: sim w (·) is a similarity function; f θ (·) is the feature extractor, x q is the query sample, and c k is the class k prototype.
sim w(·) is calculated as follows
simw(A,B)=λcos(Fw(A),Fw(B))
Wherein: F w(·) is a single-layer neural network parameterized by w with output dimension 2048, and λ is an inverse temperature parameter.
θ and ω are updated with the following loss function
Wherein: (x q, y q) is drawn from the query set; p θ,ω is the normalized score of the corresponding category, computed as above; E denotes the mathematical expectation.
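The prototypical classification of Step7-1 can be sketched as follows. For simplicity the learned projection F w is taken as the identity (an assumption), so sim w reduces to λ·cos; the features are random stand-ins for f θ outputs:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def prototypes(feats, labels, n_cls):
    """c_k = mean of the support embeddings whose grade label is k."""
    return np.array([feats[labels == k].mean(axis=0) for k in range(n_cls)])

def classify(fq, protos, lam=10.0):
    """Normalised score p(k | x_q) from sim_w = lam * cos(., .); the learned
    single-layer projection F_w is omitted (identity) in this sketch."""
    sims = [lam * (fq @ c) / (np.linalg.norm(fq) * np.linalg.norm(c))
            for c in protos]
    return softmax(np.array(sims))

rng = np.random.default_rng(5)
feats = rng.normal(size=(25, 16))
labels = np.repeat(np.arange(5), 5)          # 5 grades x K=5 support samples
p = classify(rng.normal(size=16), prototypes(feats, labels, 5))
assert p.shape == (5,) and np.isclose(p.sum(), 1.0)
```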
Step7-2, comparison-network pre-training:
The comparison network g ξ can be trained with a conditional loss and a contrastive loss. Given an input fundus color photograph x, two images x' and x'' are generated by different data-augmentation methods. The two augmented images are passed in turn through the contrastive feature-extraction network g ξ and a projection multi-layer perceptron head σ, which further projects and transforms the original features extracted by g ξ. Finally, the output of σ is passed through a prediction multi-layer perceptron head δ, after which the negative cosine similarity of the two embedded vectors can be obtained:
Wherein: i, 2 represents the l2 norm; stop-gradient is a stop gradient operation that is commonly used in the construction of certain loss functions, where certain parameters should not be updated according to the gradient, but should remain unchanged.
Meanwhile, the conditional loss is used to guide the learning of the comparison network, which is guided by the feature extractor f θ(·) (see Step7-1):
Combining the contrastive loss with the conditional loss yields the objective function optimized by the comparison network:
wherein: γ is a positive constant balancing the importance of the contrastive loss and the conditional loss.
f θ is realized with a Conv-4-64 backbone network. The projection multi-layer perceptron head σ consists of the single-layer neural network F w(·) followed by a multi-layer neural network (layers of {1600, 2048, 2048} units, each with batch normalization). The prediction multi-layer perceptron head δ is parameterized by a three-layer neural network with 512 hidden units and batch normalization at the hidden layer; all layers use the ReLU activation function.
Step8, model element training
The training set used in the meta-training phase is derived from the broader medical dataset D b, which is required to contain no samples of the DR-lesion query classes, i.e., C base∩C novel = ∅; D b contains N categories. Step1 through Step7 show how the model performs DR grading (i.e., the case N=5); the steps for the broader medical dataset D b are identical.
Step8-1, define meta-loss:
Wherein: m is the total number of samples of the query set,
Noise is artificially injected into dataset D b, and the intra-class noise loss on D b is defined as follows:
Wherein the indicator takes the value 1 when the sample is artificially introduced noise, and 0 otherwise.
For all support set samples, constructing a similarity matrix M, calculating the similarity between any two samples, and defining the inter-class loss as:
wherein,
ℓ(x (i)) represents the ground-truth label of x (i).
The final loss function is defined as:
Ltotal=Lme+ηLra+γLer
wherein: η and γ are positive constants representing the importance of the different losses.
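The final objective of Step8 is a weighted sum, sketched below with illustrative loss values:

```python
def total_loss(l_me, l_ra, l_er, eta=0.5, gamma=0.5):
    """L_total = L_me + eta * L_ra + gamma * L_er, where eta and gamma are
    positive constants weighting the noise and inter-class terms."""
    assert eta > 0 and gamma > 0
    return l_me + eta * l_ra + gamma * l_er

# illustrative values for the three component losses
lt = total_loss(1.0, 0.2, 0.4)
assert abs(lt - (1.0 + 0.5 * 0.2 + 0.5 * 0.4)) < 1e-9
```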
Step9, diabetic retinopathy grading
Labeled DR grading images (the support set S n drawn from C novel) are put into the network, and the model trained in Step 8 adapts itself to this new DR dataset.
Finally, when performing grading prediction on a single fundus color photograph, the EC similarity is used to score each grade, and the grade with the highest score is taken as the final predicted label:
The probability of each DR grade can be calculated using the following formula:
The label distribution over the DR grades is thereby obtained.
The method can be implemented on hardware platforms such as computers, servers, and mobile devices, on which the data processing and model training are carried out. The method may also incorporate real-time monitoring equipment and a patient database to continuously track and update the grading results. The comparison network g ξ and the meta-network in the denoising architecture can both be implemented with a ConvNet (C64E) backbone. In C64E, each block consists of a 64-channel 3×3 convolution, batch normalization, ReLU nonlinearity, and 2×2 max pooling. The feature embedding dimension is set to 1600.
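The stated 1600-dimensional embedding follows from the C64E block structure, assuming the common 84×84 input resolution (an assumption; the patent does not state the input size):

```python
def c64e_embed_dim(h=84, w=84, blocks=4, channels=64):
    """Each C64E block: 3x3 conv (padding 1, spatial size preserved) -> BN ->
    ReLU -> 2x2 max-pool (halves each side, floor division)."""
    for _ in range(blocks):
        h, w = h // 2, w // 2
    return h * w * channels

# 84 -> 42 -> 21 -> 10 -> 5, so 5 * 5 * 64 = 1600
assert c64e_embed_dim() == 1600
```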
The fundus lesion grading method based on small-sample learning removes data with labeling bias from fundus color photographs through a dual-network structure formed by a comparison network and a meta-network, and on this basis performs fundus-lesion grading prediction for the fundus color photographs in the query set. The dual-network architecture is calibrated in two ways to eliminate data noise: instance-level calibration and metric-level calibration. For instance-level calibration, the prototypes are corrected using the sample weights extracted by the two networks. This better captures the relations between the DR samples of each grade and the whole set of DR samples across grades, and improves the model's ability to distinguish samples of different grades. The corrected prototypes represent the feature and label information of each DR grade more accurately, improving the model's performance on the few-sample DR grading task as well as its feature- and label-representation capability.
For metric-level calibration, the Ensemble with Consistency (EC) principle is introduced: the similarity between two instances is computed by fusing the similarity evaluations of the two different networks. EC similarity adjusts the confidence of a similarity prediction according to the consistency of the similarities in the two networks and scales the predicted similarity score, so the similarity between two DR samples can be evaluated more accurately. Specifically, the similarity score of each DR grade is implicitly scaled, and the prediction scores are adaptively calibrated by measuring the consistency of the similarity scores predicted by the two networks. Metric-level calibration thus mitigates the effect of mispredictions by either network and improves the stability and robustness of the model.
The artificial neural network comprises a pre-training module that adopts self-supervised learning: it exploits the supervision available in labeled data to shape and improve the self-supervised feature manifold without requiring auxiliary unlabeled data, reducing representation bias and mining more effective semantic information. When pre-training the comparison network, a feature extractor is first learned on the labeled data by ordinary supervised learning, and the prototype of each category, i.e., of each DR grade, is computed. Next, a conditional self-supervised model is trained using a self-supervision module and a condition module. The self-supervision module generates two different augmented views by random augmentation and computes the similarity loss between their embedding vectors, i.e., between DR-lesion images of the same patient seen from different views. The condition module uses the features learned in the pre-training stage as prior knowledge, i.e., the learned prototype representation of each DR grade, to guide and optimize the feature manifold learned by the self-supervision module, so that the comparison network combines multiple views to extract more semantic information and obtain better representations.
Claims (2)
1. The fundus lesion grading method based on small sample learning is characterized by comprising the following steps of:
step1, collecting retinal fundus color photographs from a data source, deriving pictures after the data acquisition is completed, and marking all the pictures by a professional doctor, and grading according to international clinical grading standards of fundus lesions;
step2, selecting the dataset after processing the fundus color photographs to improve image quality, reduce noise, and normalize the images: selecting K samples for each grade as a support set S and a query set Q, wherein y i∈C novel, K is the number of samples extracted for each fundus lesion grade, and M is the number of samples in query set Q;
at the same time, defining an auxiliary dataset with abundant samples and accurate labels, wherein y i∈C base, requiring C base∩C novel = ∅; the auxiliary dataset is likewise divided into a support set S b and a query set Q b for meta-training;
step3, constructing a comparison network for generating intra-class weights, and learning intra-class characteristics of each level of fundus lesions after pre-training and meta-training, wherein the specific process is as follows:
Step3-1, pre-training a comparison network g ξ, and mapping the sample vector into a certain feature space through a feature extraction network;
Step3-2, for the K samples under each category of the broader medical dataset D b in the meta-training stage and the K samples of each fundus lesion grade in the query stage, calculating the pairwise similarity between samples by cosine similarity, using the formula:
cor(xa,xb)=cos(gξ(xa),gξ(xb))
Wherein: g ξ represents a pre-trained comparative network feature extractor, x a,xb represents two support samples, respectively;
Obtaining a K×K correlation matrix Z n∈R K×K for class n, wherein each correlation matrix Z n contains the correlation information between a given support sample and the remaining K-1 samples of the same grade, this information being simultaneously scattered across the other K-1 correlation features;
Step3-3, inputting the correlation matrix Z n directly into the Transformer layer:
Wherein: φ T is the Transformer layer parameter, Z n represents the correlation matrix, and o n represents the output result;
Then, applying the softmax function over the K samples, calculating the intra-class weight vector V n corresponding to each sample:
Vn=ρ(on)
Generating a vector representing weights within the class for each sample;
Step4, constructing a meta-network for generating inter-class weights, and learning inter-class characteristics of each level of fundus lesions after pre-training and meta-training, wherein the specific process is as follows:
step4-1, initializing a meta learning network For extracting support set features;
Step4-2, firstly, for all support samples, performing K-Means clustering with the class prototypes of the support samples as initial cluster centers, until convergence;
Step4-3, calculating cosine similarity between the final clustering center and all samples to obtain a similarity matrix;
Step4-4, using a softmax function, obtaining an inter-class weight of a certain support sample x t for grading a certain fundus lesion as follows:
Wherein: Representing a similarity score for sample x t for fundus lesion level n; τ is a superparameter to prevent gradient disappearance; k represents the number of samples corresponding to each category;
Step5, correcting the eye fundus lesion prototype by using sample weights extracted from the comparison network and the meta-network, wherein the specific process is as follows:
Step5-1, prototype correction in comparison network is:
Wherein: α+β=1; one weight is the inter-class weight and the other the intra-class weight; g ξ denotes the comparison-network feature extractor; x t represents the t-th sample; the final symbol represents the k-th sample of class n;
Step5-2, prototype modification in the meta-network is:
Wherein: α+β=1; one weight is the inter-class weight and the other the intra-class weight; the meta-network feature extractor maps the samples; x t represents the t-th sample, and the final symbol represents the k-th sample of class n;
step6, carrying out similarity scoring on the unlabeled image and each level fundus lesion prototype to carry out grading prediction on fundus lesions, wherein the specific process is as follows:
step6-1, define the EC similarity score as follows:
Wherein: x represents a query sample; c n represents the sample center point of fundus lesion grade n; g ξ denotes the comparison-network feature extractor; the meta-network feature extractor likewise maps the sample; p n represents the prototype corrected by the comparison network; p' n represents the prototype corrected by the meta-network;
step6-2, calculating EC similarity scores corresponding to various levels of fundus lesions according to a certain query sample x, and obtaining probabilities corresponding to the fundus lesion levels by using a softmax function:
Wherein: s EC denotes the EC similarity score; x represents a query sample; c n represents the sample center point of fundus lesion grade n; T is a temperature hyperparameter.
2. The fundus lesion classifying method based on small sample learning according to claim 1, wherein Step3-1 is specifically as follows:
step3-1-1, feature extractor pre-training: given feature extractor f θ (·), the prototype for each level k is calculated as
Wherein: N k is the number of support samples with fundus lesion grade k; x t denotes a support sample; y t represents the fundus lesion grade label corresponding to sample x t;
given a new sample x q of the query set, the classifier outputs a normalized classification score for each class k
Wherein: sim w (·) is a similarity function; f θ (·) is the feature extractor, x q is the query sample, c k is the class k prototype;
sim w(·) is calculated as follows
simw(A,B)=λcos(Fw(A),Fw(B))
Wherein: f w (-) is a w-parameterized single-layer neural network; lambda is the inverse temperature parameter;
the θ and ω are updated by the following loss function
Wherein: x q,yq are all derived from the query sample; p θ,ω is the normalized score for the corresponding class; e represents a mathematical expectation;
step3-1-2, contrast web pretraining:
The comparison network g ξ is trained using a conditional loss and a contrastive loss: given an input fundus color photograph x, two images x' and x'' processed by different data-augmentation methods are generated; the two augmented images are passed in turn through the contrastive feature-extraction network g ξ and a projection multi-layer perceptron head σ, σ further projecting and transforming the original features extracted by g ξ; finally, the result mapped by σ is passed through a prediction multi-layer perceptron head δ, thereby obtaining the negative cosine similarity of the two embedded vectors:
Wherein: i, 2 represents the l2 norm; stop-gradient is stop gradient operation;
meanwhile, the conditional loss is utilized to guide the learning of the comparison network, which is guided by the feature extractor f θ(·):
Combining the comparison loss with the conditional loss to obtain an objective function of the comparison network optimization, wherein the objective function is as follows:
wherein: γ is a positive constant balancing the importance of the contrastive loss and the conditional loss.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311491052.9A CN117557840B (en) | 2023-11-10 | 2023-11-10 | Fundus lesion grading method based on small sample learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117557840A CN117557840A (en) | 2024-02-13 |
CN117557840B true CN117557840B (en) | 2024-05-24 |
Family
ID=89817778
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311491052.9A Active CN117557840B (en) | 2023-11-10 | 2023-11-10 | Fundus lesion grading method based on small sample learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117557840B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117935030A (en) * | 2024-03-22 | 2024-04-26 | 广东工业大学 | Multi-label confidence calibration method and system for double-view-angle correlation perception regularization |
Citations (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108615051A (en) * | 2018-04-13 | 2018-10-02 | 博众精工科技股份有限公司 | Diabetic retina image classification method based on deep learning and system |
CN110969191A (en) * | 2019-11-07 | 2020-04-07 | 吉林大学 | Glaucoma prevalence probability prediction method based on similarity maintenance metric learning method |
CN111639679A (en) * | 2020-05-09 | 2020-09-08 | 西北工业大学 | Small sample learning method based on multi-scale metric learning |
CN111858991A (en) * | 2020-08-06 | 2020-10-30 | 南京大学 | Small sample learning algorithm based on covariance measurement |
AU2020103938A4 (en) * | 2020-12-07 | 2021-02-11 | Capital Medical University | A classification method of diabetic retinopathy grade based on deep learning |
CN113361612A (en) * | 2021-06-11 | 2021-09-07 | 浙江工业大学 | Magnetocardiogram classification method based on deep learning |
CN113537305A (en) * | 2021-06-29 | 2021-10-22 | 复旦大学 | Image classification method based on matching network less-sample learning |
EP3944185A1 (en) * | 2020-07-23 | 2022-01-26 | INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência | Computer-implemented method, system and computer program product for detecting a retinal condition from eye fundus images |
CN114022766A (en) * | 2021-11-04 | 2022-02-08 | 江苏农林职业技术学院 | Tea typical disease image recognition system and method based on small sample learning |
CN114283355A (en) * | 2021-12-06 | 2022-04-05 | 重庆邮电大学 | Multi-target endangered animal tracking method based on small sample learning |
CN114494195A (en) * | 2022-01-26 | 2022-05-13 | 南通大学 | Small sample attention mechanism parallel twinning method for fundus image classification |
CN114898158A (en) * | 2022-05-24 | 2022-08-12 | 杭州电子科技大学 | Small sample traffic abnormity image acquisition method and system based on multi-scale attention coupling mechanism |
CN115019089A (en) * | 2022-05-30 | 2022-09-06 | 中科苏州智能计算技术研究院 | Double-current convolutional neural network for small sample learning |
CN115170868A (en) * | 2022-06-17 | 2022-10-11 | 湖南大学 | Clustering-based small sample image classification two-stage meta-learning method |
CN115359294A (en) * | 2022-08-23 | 2022-11-18 | 上海交通大学 | Cross-granularity small sample learning method based on similarity regularization intra-class mining |
CN115458174A (en) * | 2022-09-20 | 2022-12-09 | 吉林大学 | Method for constructing intelligent diagnosis model of diabetic retinopathy |
CN115731411A (en) * | 2022-10-27 | 2023-03-03 | 西北工业大学 | Small sample image classification method based on prototype generation |
CN115910385A (en) * | 2022-11-28 | 2023-04-04 | 中科院成都信息技术股份有限公司 | Pathological degree prediction method, system, medium, equipment and terminal |
WO2023056681A1 (en) * | 2021-10-09 | 2023-04-13 | 北京鹰瞳科技发展股份有限公司 | Method for training multi-disease referral system, multi-disease referral system and method |
CN116503668A (en) * | 2023-05-18 | 2023-07-28 | 西安交通大学 | Medical image classification method based on small sample element learning |
CN116529762A (en) * | 2020-10-23 | 2023-08-01 | 基因泰克公司 | Multimodal map atrophic lesion segmentation |
CN116525075A (en) * | 2023-04-27 | 2023-08-01 | 四川师范大学 | Thyroid nodule computer-aided diagnosis method and system based on few sample learning |
CN116612335A (en) * | 2023-07-18 | 2023-08-18 | 贵州大学 | Few-sample fine-granularity image classification method based on contrast learning |
CN116824212A (en) * | 2023-05-11 | 2023-09-29 | 杭州聚秀科技有限公司 | Fundus photo classification method based on small sample learning |
CN116883157A (en) * | 2023-09-07 | 2023-10-13 | 南京大数据集团有限公司 | Small sample credit assessment method and system based on metric learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2023547402A (en) * | 2020-10-23 | 2023-11-10 | ジェネンテック, インコーポレイテッド | Multimodal Geographic Atrophy Lesion Segmentation |
Non-Patent Citations (5)
Title |
---|
Lei Shi; Bin Wang; Junxing Zhang. A Multi-stage Transfer Learning Framework for Diabetic Retinopathy Grading on Small Data. ICC 2023 - IEEE International Conference on Communications. 2023. *
Lyu Yongqiang; Min Weiqing; Duan Hua; Jiang Shuqiang. Few-shot food image recognition fusing triplet convolutional neural network and relation network. Computer Science. 2019(01). *
Radar target recognition based on two-dimensional locality-sensitive discriminant analysis; Zhang Shanwen, Zhang Chuanlei, Zhang Yunlong; Electronics Optics & Control; 2013-04-01(04) *
Conflict evidence combination method based on clustering weighting; Dong Yu, Zhang Youpeng; Computer Software and Computer Applications; 2023-03-22 *
Deep learning classification method for diabetic retinal images; Li Qiong, Bai Zhengyao, Liu Yingfang; Journal of Image and Graphics; 2018-10-16(10) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||