CN114723674A - Glaucoma auxiliary screening system based on decoupling training and reasoning - Google Patents

Glaucoma auxiliary screening system based on decoupling training and reasoning

Info

Publication number
CN114723674A
CN114723674A
Authority
CN
China
Prior art keywords
fundus image
training
module
neural network
glaucoma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210257364.2A
Other languages
Chinese (zh)
Inventor
曾明如
涂佳昊
赖平红
祝琴
李钰瑾
曾佳欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang University
Original Assignee
Nanchang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang University filed Critical Nanchang University
Priority to CN202210257364.2A
Publication of CN114723674A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14Arrangements specially adapted for eye photography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Veterinary Medicine (AREA)
  • Computational Linguistics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Artificial Intelligence (AREA)
  • Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Ophthalmology & Optometry (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a glaucoma auxiliary screening system based on decoupled training and reasoning, comprising a control module, a fundus image acquisition module, a fundus image preprocessing module, a neural network reasoning module and an output module. The fundus image acquisition module acquires a fundus image of the user; the fundus image preprocessing module preprocesses the acquired fundus image; the neural network reasoning module classifies and screens the fundus image; the output module judges, according to the classification and screening result, whether the user shows glaucoma symptoms and gives a treatment suggestion; the control module coordinates the image acquisition, preprocessing, classification screening and result output of the other modules. The invention improves the portability of glaucoma screening and meets the requirements of large-scale screening, so that screening can be performed anytime and anywhere at low cost and with a low barrier to use, which is of great significance for the early prevention of glaucoma.

Description

Glaucoma auxiliary screening system based on decoupling training and reasoning
Technical Field
The invention relates to the fields of artificial intelligence and medical imaging, and in particular to a glaucoma auxiliary screening system based on decoupled training and reasoning.
Background
Glaucoma is an optic neuropathy characterized by progressive degeneration of retinal ganglion cells, visual field loss and deterioration of vision, and is one of the main causes of irreversible blindness in China. According to statistics, there are as many as 15.82 million glaucoma patients in China, and scholars predict that the number of glaucoma patients worldwide will reach about 111.82 million by 2040.
If glaucoma is detected early and reasonable treatment is adopted, most patients can halt the progression of the disease and retain some vision. However, the disease often has no obvious symptoms in its early stage, and many patients are not diagnosed until a late stage, so that treatment is delayed and vision is lost.
Clinical examination methods for the retina include fundus photography, optical coherence tomography and the like. Fundus images have the advantages of low cost, simple operation and good portability, but they are acquired clinically with a fundus camera, and traditional fundus cameras are bulky and not suitable for large-scale screening. Fundus images also need to be read by a professional doctor; the changes in the morphology of the optic disc and the retinal nerve fiber layer in early-stage glaucoma are difficult to distinguish with the naked eye, and misdiagnosis easily occurs if the quality of the acquired fundus image is not high enough.
Disclosure of Invention
Aiming at the deficiencies of the prior art and the practical requirements of glaucoma screening, the invention provides a glaucoma auxiliary screening system based on decoupled training and reasoning.
In order to achieve the above object, the invention provides a glaucoma auxiliary screening system based on decoupled training and reasoning, which comprises a control module, and a fundus image acquisition module, a fundus image preprocessing module, a neural network reasoning module and an output module which are respectively connected with the control module;
the fundus image acquisition module is used for acquiring images of the fundus of the user;
the fundus image preprocessing module is used for preprocessing the acquired fundus images so as to enhance the image quality and enable the neural network reasoning module to identify the image information more easily;
the neural network reasoning module is used for classifying and screening the preprocessed fundus images;
the output module is used for judging whether the user has glaucoma symptoms or not according to the classification screening result and giving treatment suggestions when the user has the glaucoma symptoms;
the control module is used for controlling the fundus image acquisition, preprocessing, classification screening and result output of the modules.
The fundus image preprocessing module comprises a color processing unit, a fundus image spatial transformation unit and an image enhancement processing unit;
the color processing unit is used for extracting the RGB features of the fundus image and adjusting the brightness, contrast, saturation and sharpness of the image;
the fundus image spatial transformation unit is used for cropping, rotating, flipping and scaling the fundus image;
the image enhancement processing unit is used for performing noise reduction on the image so as to improve the image quality.
The fundus image acquisition module acquires a fundus image of a user through a fundus image acquisition camera.
The construction process of the neural network model is as follows:
step 1, acquiring a large number of fundus image samples by fundus image acquisition equipment, and labeling and classifying the acquired fundus image samples;
step 2, preprocessing the fundus image sample;
step 3, training the neural network model with the preprocessed fundus image samples, decoupling the training model from the reasoning model of the obtained neural network by a structural reparameterization algorithm, and compressing it to obtain the target neural network model.
In step 1, a large number of fundus image samples are collected, including fundus images of glaucoma patients with different degrees of symptoms and fundus images of normal subjects. After the fundus image samples are labeled, all image samples are divided into three sets, namely training samples, test samples and verification samples, where the training samples account for half of the total number of samples and the test samples and verification samples each account for one quarter.
The training process of the neural network model in the step 3 is as follows:
S31, training a neural network with the preprocessed fundus image data, wherein the training model adopts a multi-branch shrinking neural network model to realize the fusion of multi-scale features and improve the feature utilization rate, while preventing gradient vanishing, degradation and feature loss as the number of network layers increases;
S32, the loss function is a cross-entropy function; training ends after the loss function converges, yielding the trained neural network model;
and S33, using the extended structural reparameterization algorithm to equivalently convert the trained neural network model parameters into a simple single-branch neural network model for reasoning.
The loss function in step S32 is the cross-entropy function, whose expression is
$$H(p, q) = -\sum_{x} p(x)\,\log q(x)$$
where p is the actual value of the sample and q is the predicted value.
The structural reparameterization algorithm in step S33 is derived from the linearity of the convolution operation.
Beneficial effects of the invention
The glaucoma auxiliary screening system based on decoupled training and reasoning uses a multi-branch shrinking neural network model for training, and multi-scale features are mined and fused during training, which improves the feature utilization rate. After training, the parameters of the multi-branch shrinking model are equivalently transformed into a single-branch simple neural network model for reasoning by using and extending a structural reparameterization algorithm, which reduces the complexity of the model while preserving its accuracy and accelerates inference. The portable glaucoma auxiliary screening system based on a multi-branch shrinking network with decoupled training and reasoning adopts a deep learning method and exploits its strong feature extraction capability to address the problem that early glaucoma symptoms are inconspicuous and easily missed, so that non-professionals can use the system to screen for glaucoma.
Drawings
FIG. 1 is a schematic diagram of the components of a decoupled training and reasoning based glaucoma screening system of the present invention;
FIG. 2 is a flow chart of the present invention for constructing a neural network model;
FIG. 3 is a flow chart of the fundus image sample pre-processing in an embodiment of the present invention;
FIG. 4 illustrates the effects of some of the color processing operations in an embodiment of the present invention;
FIG. 5 illustrates the effects of some of the spatial transformation operations in an embodiment of the present invention;
FIG. 6 is a flow chart of neural network model training and structure reparameterization in an embodiment of the present invention;
FIG. 7 is a schematic diagram of a TrapNet network structure according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of the neural network after structural reparameterization according to an embodiment of the present invention;
fig. 9 is a flowchart of a glaucoma screening system according to the present invention applied to follow-up monitoring of a patient.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without any inventive step, are within the scope of the present invention.
Embodiment: see figs. 1-9.
As shown in fig. 1, the glaucoma auxiliary screening system based on decoupling training and reasoning of the present invention comprises a control module, and a fundus image acquisition module, a fundus image preprocessing module, a neural network reasoning module, and an output module, which are respectively connected to the control module;
the fundus image acquisition module is used for acquiring a fundus image of a user;
the fundus image preprocessing module is used for preprocessing the acquired fundus images so as to enhance the image quality and enable the neural network reasoning module to identify the image information more easily;
the neural network reasoning module is used for classifying and screening the preprocessed fundus images;
the output module is used for judging whether the user has glaucoma symptoms or not according to the classification screening result and giving treatment suggestions when the user has the glaucoma symptoms;
the control module is used for controlling the fundus image acquisition, the preprocessing, the classification screening and the result output of each module.
As shown in fig. 3, the fundus image preprocessing module includes a color processing unit, a fundus image spatial transformation unit, and an image enhancement processing unit;
as shown in fig. 4, the color processing unit is used for RGB feature extraction and adjustment of the brightness, contrast, saturation and sharpness of the fundus image;
as shown in fig. 5, the fundus image spatial transformation unit is used to crop, rotate, flip and scale the fundus image;
the image enhancement processing unit is used for carrying out noise reduction processing on the image so as to improve the image quality.
The fundus image acquisition module acquires a fundus image of a user through a fundus image acquisition camera.
As shown in fig. 2, the neural network model is constructed as follows:
step 1, acquiring a large number of fundus image samples by fundus image acquisition equipment, and labeling and classifying the acquired fundus image samples;
step 2, preprocessing the fundus image sample;
step 3, training the neural network model with the preprocessed fundus image samples, decoupling the training model from the reasoning model of the obtained neural network by a structural reparameterization algorithm, and compressing it to obtain the target neural network model.
In step 1, a large number of fundus image samples are collected, including fundus images of glaucoma patients with different degrees of symptoms and fundus images of normal subjects. After the fundus image samples are labeled, all image samples are divided into three sets, namely training samples, test samples and verification samples, where the training samples account for half of the total number of samples and the test samples and verification samples each account for one quarter.
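A sketch of the 1/2 : 1/4 : 1/4 split described in step 1 follows, assuming the labelled samples are held as a list of (image path, label) pairs; the helper below is hypothetical and not part of the patent.

```python
import random

def split_samples(samples, seed=0):
    """Divide labelled (path, label) pairs into training / test / verification sets
    in the proportions described in step 1 (1/2, 1/4, 1/4)."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)     # shuffle reproducibly before splitting
    n = len(samples)
    n_train, n_test = n // 2, n // 4
    train = samples[:n_train]
    test = samples[n_train:n_train + n_test]
    verification = samples[n_train + n_test:]
    return train, test, verification
```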
As shown in fig. 6, the training process of the neural network model in step 3 is as follows:
S31, training a neural network with the preprocessed fundus image data, wherein the training model adopts a multi-branch shrinking neural network model to realize the fusion of multi-scale features and improve the feature utilization rate, while preventing gradient vanishing, degradation and feature loss as the number of network layers increases;
S32, the loss function is a cross-entropy function; training ends after the loss function converges, yielding the trained neural network model;
and S33, using the extended structural reparameterization algorithm to equivalently convert the trained neural network model parameters into a simple single-branch neural network model for reasoning.
Specifically, in order to fuse multi-scale features and improve the feature utilization rate, the shallow network block Trap V1 comprises several branches containing a 1x1 convolution followed by different numbers of 3x3 convolutions (two consecutive 3x3 convolutions are equivalent in receptive field to a 5x5 convolution, and three consecutive 3x3 convolutions to a 7x7 convolution), so that features of different scales are extracted. Each branch first uses a 1x1 convolution to compress the channels, preventing an excessive number of channels from harming network efficiency, and a BN (Batch Normalization) layer is added after each convolution module to normalize the data distribution and accelerate training. A skip connection is added to each network block, which preserves low-level features while propagating the gradient and prevents gradient vanishing and degradation as the number of network layers increases. As the network deepens, the receptive field of several consecutive 3x3 convolutions becomes larger and features are easily lost, so the branch with three consecutive 3x3 convolutions is removed in Trap V2, and the branch with two consecutive 3x3 convolutions is further removed in Trap V3. The network structure of TrapNet is shown in fig. 7.
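The patent does not give the exact channel widths, activation placement or output projection of TrapNet, so the PyTorch-style block below is only a reconstruction under stated assumptions (in particular, the trailing 1x1 convolution that restores the channel width is an assumption, added so that every branch can be summed with the skip connection).

```python
import torch
import torch.nn as nn

def conv_bn(in_ch, out_ch, k):
    # A convolution followed by a BN layer, as described for every convolution module.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(out_ch),
    )

class TrapBlockV1(nn.Module):
    """Multi-branch block: each branch compresses the channels with a 1x1 convolution,
    then stacks 0/1/2/3 consecutive 3x3 convolutions (receptive fields 1/3/5/7);
    the branch outputs are summed with a skip connection."""
    def __init__(self, channels, mid_channels):
        super().__init__()
        self.branches = nn.ModuleList()
        for n_convs in (0, 1, 2, 3):   # Trap V2 would omit 3; Trap V3 would omit 2 and 3
            layers = [conv_bn(channels, mid_channels, 1)]
            layers += [conv_bn(mid_channels, mid_channels, 3) for _ in range(n_convs)]
            layers += [conv_bn(mid_channels, channels, 1)]   # assumed output projection
            self.branches.append(nn.Sequential(*layers))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = x                        # skip connection preserves low-level features
        for branch in self.branches:
            out = out + branch(x)
        return self.act(out)

# Example: a block operating on 64-channel feature maps with 16-channel branches
block = TrapBlockV1(64, 16)
y = block(torch.randn(1, 64, 56, 56))   # output shape: (1, 64, 56, 56)
```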
The loss function in step S32 is the cross-entropy function, whose expression is
$$H(p, q) = -\sum_{x} p(x)\,\log q(x)$$
where p is the actual value of the sample and q is the predicted value.
Specifically, during back-propagation, if the derivative of the activation function is small, the overall gradient update is small and convergence is slow; the gradient of the cross-entropy loss with respect to the last layer, however, does not depend on the derivative of the activation function but only on the difference between the output value and the true value, so using the cross-entropy loss speeds up the weight updates and thus the convergence.
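This observation corresponds to the standard softmax-plus-cross-entropy derivation, which the patent does not spell out: with last-layer logits $z_i$, softmax output $q$ and label distribution $p$,

```latex
\[
q_i = \frac{e^{z_i}}{\sum_j e^{z_j}}, \qquad
L(p, q) = -\sum_i p_i \log q_i
\quad\Longrightarrow\quad
\frac{\partial L}{\partial z_i} = q_i - p_i ,
\]
```

so the gradient reaching the last layer depends only on the difference between the prediction and the true label, not on the derivative of the activation function.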
The structural reparameterization algorithm in step S33 is derived from the linearity of the convolution operation.
Specifically, for the structural reparameterization algorithm:

Consider an N x N convolution with A input channels and B output channels. Its parameters are a fourth-order tensor $W \in \mathbb{R}^{B \times A \times N \times N}$ and a bias $b \in \mathbb{R}^{B}$. Assume its input is $I \in \mathbb{R}^{A \times H \times W}$ and its output is $O \in \mathbb{R}^{B \times H' \times W'}$, and let $\mathrm{REP}(b) \in \mathbb{R}^{B \times H' \times W'}$ denote the bias b replicated to the output shape. Writing $\circledast$ for the convolution operator, the N x N convolution can be expressed as
$$O = I \circledast W + \mathrm{REP}(b),$$
and the output of the i-th channel at position (h, w) is
$$O_{i,h,w} = \sum_{a=1}^{A}\sum_{u=1}^{N}\sum_{v=1}^{N} W_{i,a,u,v}\, I_{a,\,h+u-1,\,w+v-1} + b_i .$$
This formula shows that the convolution operation is in fact a linear operation, and linear operations obey the associative and distributive laws; for convolutions these laws can be expressed in the following forms:
$$I \circledast W^{(1)} + I \circledast W^{(2)} = I \circledast \big(W^{(1)} + W^{(2)}\big),$$
$$(\lambda\, I) \circledast W = \lambda\,\big(I \circledast W\big).$$
Therefore, the structural reparameterization method is obtained by exploiting the linearity of the convolution operation.
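A small numerical check of the additivity used above (PyTorch is assumed here purely for illustration): convolving an input with two kernels and summing the results equals convolving once with the summed kernels and biases.

```python
import torch
import torch.nn.functional as F

A, B, N = 3, 8, 3                           # input channels, output channels, kernel size
I  = torch.randn(1, A, 32, 32)              # a random input feature map
W1, W2 = torch.randn(B, A, N, N), torch.randn(B, A, N, N)
b1, b2 = torch.randn(B), torch.randn(B)

# I (*) W1 + I (*) W2  ==  I (*) (W1 + W2), and the biases add in the same way.
lhs = F.conv2d(I, W1, b1, padding=1) + F.conv2d(I, W2, b2, padding=1)
rhs = F.conv2d(I, W1 + W2, b1 + b2, padding=1)
print(torch.allclose(lhs, rhs, atol=1e-5))  # True up to floating-point error
```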
Further, after the decoupling of training and reasoning by structural reparameterization, the neural network model is divided into a training model and a reasoning model, and the structure of the whole network model is shown in fig. 8. The training model first uses Trap V1 network blocks containing multi-scale branches and, as the network deepens, gradually switches to Trap V2 and Trap V3 blocks with smaller receptive fields to prevent feature loss. After training, each TrapNet block is equivalently converted into a single 3x3 convolution by the structural reparameterization method, which greatly reduces the number of parameters and the computational cost at inference time.
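Two standard building blocks of such a conversion are folding a BatchNorm layer into the preceding convolution and zero-padding a 1x1 kernel to 3x3 so that parallel branches can be summed into a single kernel. The sketch below illustrates only this principle; the full TrapNet conversion, including the chains of stacked 3x3 convolutions, is not detailed in the patent.

```python
import torch
import torch.nn.functional as F

def fuse_conv_bn(conv_w, bn):
    """Fold a BatchNorm2d layer into the preceding (bias-free) convolution."""
    std = (bn.running_var + bn.eps).sqrt()
    scale = bn.weight / std                          # per-output-channel scaling
    fused_w = conv_w * scale.reshape(-1, 1, 1, 1)
    fused_b = bn.bias - bn.running_mean * scale
    return fused_w, fused_b

def pad_1x1_to_3x3(w_1x1):
    """Place a 1x1 kernel at the centre of a 3x3 kernel so branches can be added."""
    return F.pad(w_1x1, [1, 1, 1, 1])

# Example: fuse two parallel branches (3x3 and 1x1, each followed by BN)
# into one 3x3 convolution with an additive bias.
bn3, bn1 = torch.nn.BatchNorm2d(8).eval(), torch.nn.BatchNorm2d(8).eval()
w3, b3 = fuse_conv_bn(torch.randn(8, 3, 3, 3), bn3)
w1, b1 = fuse_conv_bn(torch.randn(8, 3, 1, 1), bn1)
w_merged, b_merged = w3 + pad_1x1_to_3x3(w1), b3 + b1
```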
As shown in fig. 9, which is a flowchart of the glaucoma screening system applied to patient follow-up monitoring: when the system indicates that a user may have glaucoma, the staff recommends that the user go to a hospital for confirmation. If the diagnosis is confirmed at the hospital, the staff creates a file based on the examination results and stores the user's cup-to-disc ratio calculated from the fundus image in the file. Users with a file then take fundus images regularly, and whether the condition has deteriorated is judged by comparing the cup-to-disc ratios recorded at each visit; if the condition deteriorates, the user is advised to go to the hospital for treatment, which saves the user's time and examination costs.
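The follow-up comparison can be summarised in a few lines; the record structure and the deterioration threshold below are assumptions chosen for illustration and are not values given in the patent.

```python
def condition_deteriorated(cup_disc_history, threshold=0.05):
    """Compare the latest cup-to-disc ratio with the previous record.
    Returns True when the increase exceeds the (assumed) threshold,
    in which case the user is advised to visit the hospital."""
    if len(cup_disc_history) < 2:
        return False
    previous, latest = cup_disc_history[-2], cup_disc_history[-1]
    return (latest - previous) > threshold

# Example: cup-to-disc ratios recorded at successive follow-up visits
history = [0.42, 0.43, 0.51]
if condition_deteriorated(history):
    print("Cup-to-disc ratio increased noticeably; recommend hospital examination.")
```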
In conclusion, the invention trains a multi-branch shrinking neural network model, mining and fusing multi-scale features during training to improve the feature utilization rate; after training, the parameters of the multi-branch model are equivalently transformed into a single-branch simple neural network model for reasoning by using and extending the structural reparameterization algorithm, which reduces the complexity of the neural network model while preserving its accuracy and accelerates inference.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A glaucoma auxiliary screening system based on decoupled training and reasoning, characterized by comprising a control module, and a fundus image acquisition module, a fundus image preprocessing module, a neural network reasoning module and an output module which are respectively connected with the control module;
the fundus image acquisition module is used for acquiring a fundus image of a user;
the fundus image preprocessing module is used for preprocessing the acquired fundus images so as to enhance the image quality and enable the neural network reasoning module to identify the image information more easily;
the neural network reasoning module is used for classifying and screening the preprocessed fundus images;
the output module is used for judging whether the user has glaucoma symptoms or not according to the classification screening result and giving treatment suggestions when the user has the glaucoma symptoms;
the control module is used for controlling the fundus image acquisition, preprocessing, classification screening and result output of the modules.
2. The glaucoma auxiliary screening system based on decoupled training and reasoning according to claim 1, wherein the fundus image preprocessing module comprises a color processing unit, a fundus image spatial transformation unit and an image enhancement processing unit;
the color processing unit is used for extracting the RGB features of the fundus image and adjusting the brightness, contrast, saturation and sharpness of the image;
the fundus image spatial transformation unit is used for cropping, rotating, flipping and scaling the fundus image;
the image enhancement processing unit is used for carrying out noise reduction processing on the image so as to improve the image quality.
3. The glaucoma auxiliary screening system based on decoupled training and reasoning of claim 1 wherein the fundus image capturing module captures a fundus image of the user via a fundus image capturing camera.
4. The decoupled training and reasoning based glaucoma auxiliary screening system of claim 3 wherein said neural network model is constructed as follows:
step 1, acquiring a large number of fundus image samples by fundus image acquisition equipment, and labeling and classifying the acquired fundus image samples;
step 2, preprocessing the fundus image sample;
and step 3, training the neural network model with the preprocessed fundus image samples, decoupling the training model from the reasoning model of the obtained neural network by a structural reparameterization algorithm, and compressing it to obtain the target neural network model.
5. The glaucoma auxiliary screening system based on decoupled training and reasoning according to claim 4, wherein in step 1 a large number of fundus image samples are collected, including fundus images of glaucoma patients with different degrees of symptoms and fundus images of normal subjects; after the fundus image samples are labeled, all image samples are divided into three sets, namely training samples, test samples and verification samples, wherein the training samples account for half of the total number of samples and the test samples and verification samples each account for one quarter of the total number of samples.
6. The glaucoma auxiliary screening system based on decoupled training and reasoning according to claim 4, wherein the training process of the neural network model in step 3 is as follows:
S31, training a neural network with the preprocessed fundus image data, wherein the training model adopts a multi-branch shrinking neural network model to realize the fusion of multi-scale features and improve the feature utilization rate, while preventing gradient vanishing, degradation and feature loss as the number of network layers increases;
S32, using a cross-entropy function as the loss function, and finishing training after the loss function converges to obtain the trained neural network model;
and S33, using the extended structural reparameterization algorithm to equivalently convert the trained neural network model parameters into a simple single-branch neural network model for reasoning.
7. The glaucoma auxiliary screening system based on decoupled training and reasoning according to claim 6, wherein the loss function in step S32 uses a cross-entropy function, expressed as:
$$H(p, q) = -\sum_{x} p(x)\,\log q(x)$$
wherein p is the actual value of the sample, and q is the predicted value.
8. The decoupled training and reasoning based glaucoma auxiliary screening system of claim 6 wherein said structural reparameterization algorithm of step S33 is derived from the linear operation rule of convolution operation.
CN202210257364.2A 2022-03-16 2022-03-16 Glaucoma auxiliary screening system based on decoupling training and reasoning Pending CN114723674A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210257364.2A CN114723674A (en) 2022-03-16 2022-03-16 Glaucoma auxiliary screening system based on decoupling training and reasoning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210257364.2A CN114723674A (en) 2022-03-16 2022-03-16 Glaucoma auxiliary screening system based on decoupling training and reasoning

Publications (1)

Publication Number Publication Date
CN114723674A true CN114723674A (en) 2022-07-08

Family

ID=82238646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210257364.2A Pending CN114723674A (en) 2022-03-16 2022-03-16 Glaucoma auxiliary screening system based on decoupling training and reasoning

Country Status (1)

Country Link
CN (1) CN114723674A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476283A (en) * 2020-03-31 2020-07-31 上海海事大学 Glaucoma fundus image identification method based on transfer learning
CN113077889A (en) * 2021-03-31 2021-07-06 武开寿 Artificial intelligence ophthalmopathy screening service method and system
CN113657124A (en) * 2021-07-14 2021-11-16 内蒙古工业大学 Multi-modal Mongolian Chinese translation method based on circulation common attention Transformer
CN114021603A (en) * 2021-10-25 2022-02-08 哈尔滨工程大学 Radar signal modulation mode identification method based on model reparameterization

Similar Documents

Publication Publication Date Title
Li et al. Automatic detection of diabetic retinopathy in retinal fundus photographs based on deep learning algorithm
Ran et al. Cataract detection and grading based on combination of deep convolutional neural network and random forests
Haloi Improved microaneurysm detection using deep neural networks
Khan et al. Cataract detection using convolutional neural network with VGG-19 model
CN109345538A (en) A kind of Segmentation Method of Retinal Blood Vessels based on convolutional neural networks
Nasir et al. Deep DR: detection of diabetic retinopathy using a convolutional neural network
Hassan et al. Exploiting the transferability of deep learning systems across multi-modal retinal scans for extracting retinopathy lesions
Bilal et al. Diabetic retinopathy detection using weighted filters and classification using CNN
Wu et al. Automatic cataract detection with multi-task learning
Triyadi et al. Deep learning in image classification using vgg-19 and residual networks for cataract detection
CN117338234A (en) Diopter and vision joint detection method
Yamuna et al. Detection of abnormalities in retinal images
CN117426748A (en) MCI detection method based on multi-mode retina imaging
Khan et al. Screening fundus images to extract multiple ocular features: A unified modeling approach
Nair et al. Multi-labelled ocular disease diagnosis enforcing transfer learning
Brancati et al. Segmentation of pigment signs in fundus images for retinitis pigmentosa analysis by using deep learning
Ali et al. Cataract disease detection used deep convolution neural network
Patra et al. Diabetic Retinopathy Detection using an Improved ResNet50-InceptionV3 Structure
Calderon et al. CNN-based quality assessment for retinal image captured by wide field of view non-mydriatic fundus camera
CN114723674A (en) Glaucoma auxiliary screening system based on decoupling training and reasoning
Deepa et al. Pre-Trained Convolutional Neural Network for Automated Grading of Diabetic Retinopathy
Kazi et al. Processing retinal images to discover diseases
Nguyen et al. Cataract Detection using Hybrid CNN Model on Retinal Fundus Images
Anggraeni et al. Detection of the emergence of exudate on the image of retina using extreme learning machine method
Suwandi et al. A Systematic Literature Review: Diabetic Retinopathy Detection Using Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination