CN113768460B - Fundus image analysis system, fundus image analysis method and electronic equipment - Google Patents

Fundus image analysis system, fundus image analysis method and electronic equipment

Info

Publication number
CN113768460B
CN113768460B
Authority
CN
China
Prior art keywords
fundus
arc
category
module
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111059503.2A
Other languages
Chinese (zh)
Other versions
CN113768460A (en)
Inventor
杨志文
王欣
贺婉佶
姚轩
黄烨霖
赵昕
和超
张大磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eaglevision Medical Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Beijing Airdoc Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd, Beijing Airdoc Technology Co Ltd filed Critical Shanghai Eaglevision Medical Technology Co Ltd
Priority to CN202111059503.2A priority Critical patent/CN113768460B/en
Publication of CN113768460A publication Critical patent/CN113768460A/en
Application granted granted Critical
Publication of CN113768460B publication Critical patent/CN113768460B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0016Operational features thereof
    • A61B3/0025Operational features thereof characterised by electronic signal processing, e.g. eye models
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0016Operational features thereof
    • A61B3/0033Operational features thereof characterised by user input arrangements
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14Arrangements specially adapted for eye photography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Animal Behavior & Ethology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Public Health (AREA)
  • Computational Linguistics (AREA)
  • Surgery (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Geometry (AREA)
  • Signal Processing (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

An embodiment of the invention provides a fundus image analysis system, a fundus image analysis method and electronic equipment. The system comprises a feature extraction module, a fundus prediction module and a segmentation prediction module. The feature extraction module samples a fundus image to be analyzed to extract a fundus feature map; the fundus prediction module analyzes, from the fundus feature map, the fundus category corresponding to the fundus image, the fundus categories comprising a normal fundus and a plurality of myopia-associated fundus types; the segmentation prediction module samples the fundus feature map to analyze a segmentation prediction map corresponding to the fundus image, which indicates the class of each pixel in the fundus image, the pixel classes comprising a background pixel class, an optic disc pixel class, a plurality of arc-shaped spot pixel classes and a plurality of atrophy patch pixel classes. The invention applies machine learning to overcome the limitation that conventional myopic fundus classification offers only a coarse pathological-myopia-related category, so as to provide examinees with better diagnosis, treatment, or daily eye-care advice.

Description

Fundus image analysis system, fundus image analysis method and electronic equipment
Technical Field
The present invention relates to fundus analysis for myopia, and more particularly to a fundus image analysis system, a fundus image analysis method, and an electronic device.
Background
In recent years, the age of myopia onset among Chinese teenagers has been falling steadily. Teenage vision problems trouble many parents and schools, and teenage vision prevention and control has been elevated to the level of national strategy.
Myopia often results from poor eye-use habits combined with a heavy academic burden. Most teenagers overuse their eyes, causing the ocular axis to elongate over time, and corresponding image features appear on fundus photographs: for example, the common leopard (tessellated) fundus arises when axial elongation thins the retina so that the choroid shows through on the fundus image. In the early stage of myopia the fundus image changes only slightly; mild deformation of the eyeball pulls on the area around the optic disc, exposing different retinal tissue layers and forming fundus arc spots, whose color and morphology differ subtly across tissue layers on the fundus image. A highly myopic fundus tends to show more serious structural changes, such as diffuse atrophy and macular atrophy, which are fundus manifestations of higher myopic power. Owing to the shortage of specialized ophthalmologists in China, most vision-testing institutions are not equipped with fundus cameras and do not examine the fundus during optometry, so myopic fundus lesions are not found in time and patients cannot be guided correctly. Therefore, in addition to conventional eye-chart testing and axial length measurement, retinal fundus photographs can be examined. Accurately monitoring fundus image features under different degrees of myopia is an important link in teenage vision prevention and control.
However, current myopic fundus analysis is often limited to identifying whether the fundus is abnormal and cannot give a more detailed analysis of the specific condition of the myopic fundus, so it is difficult for practitioners to provide targeted diagnosis, treatment, or daily eye-care advice to myopic patients according to their specific conditions.
Disclosure of Invention
It is therefore an object of the present invention to overcome the above-described drawbacks of the prior art and to provide a fundus image analysis system, a fundus image analysis method, and an electronic apparatus.
The object of the invention is achieved by the following technical solutions:
according to a first aspect of the present invention, there is provided a fundus image analysis system comprising a feature extraction module, a fundus prediction module and a segmentation prediction module, wherein the feature extraction module samples a fundus image to be analyzed to extract a fundus feature map; the fundus prediction module analyzes fundus categories corresponding to fundus images according to fundus feature maps, wherein the fundus categories comprise normal fundus and various myopia associated fundus; the segmentation prediction module samples the fundus feature map to analyze a segmentation prediction map corresponding to the fundus image, which indicates a class of each pixel in the fundus image, the classes of pixels including a background pixel class, a optic disc pixel class, a plurality of arc-shaped patch pixel classes, and a plurality of atrophy patch pixel classes.
In some embodiments of the present invention, the feature extraction module downsamples the fundus image multiple times to obtain the fundus feature map, and the segmentation prediction module upsamples the fundus feature map multiple times into a multi-channel segmentation map and analyzes the multi-channel segmentation map to obtain the segmentation prediction map.
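As an illustration of this shared-encoder, two-head layout, the following is a minimal PyTorch sketch; the layer sizes, module names, and two-level depth are assumptions for demonstration, not the patent's actual architecture (which is detailed below).

```python
import torch
import torch.nn as nn

class FundusAnalysisNet(nn.Module):
    """Minimal sketch: shared encoder, fundus-category head, segmentation decoder."""
    def __init__(self, num_fundus_classes=6, num_pixel_classes=8):
        super().__init__()
        # Feature extraction module: repeated stride-2 downsampling.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 20, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(20, 40, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Fundus prediction module: global pooling + fully connected layer.
        self.fundus_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(40, num_fundus_classes),
        )
        # Segmentation prediction module: upsample back to input resolution,
        # one output channel per pixel category (background, optic disc, ...).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(40, 20, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(20, num_pixel_classes, 2, stride=2),
        )

    def forward(self, x):
        feat = self.encoder(x)                  # fundus feature map
        return self.fundus_head(feat), self.decoder(feat)
```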
In some embodiments of the invention, the system is trained as follows: training data comprising a plurality of fundus pictures, fundus category labels, and pixel category labels are acquired; the system is then trained on these data, with a fundus classification sub-loss computed from the output of the fundus prediction module and the fundus category labels, a fundus segmentation sub-loss computed from the output of the segmentation prediction module and the pixel category labels, a total loss computed from the two sub-losses, and gradient computation and parameter updates applied to the feature extraction module, the fundus prediction module, and the segmentation prediction module based on the total loss.
In some embodiments of the present invention, the system further includes an eye classification module and a quantitative analysis module, wherein the eye classification module determines, from the fundus feature map, the eye corresponding to the fundus image (left eye or right eye), and the quantitative analysis module performs quantitative analysis according to the fundus category and segmentation prediction map corresponding to the fundus image, or according to the fundus category, eye category, and segmentation prediction map, to obtain various quantitative indicators.
In some embodiments of the invention, the system is trained as follows: training data comprising a plurality of fundus pictures (the fundus pictures in the training data are likewise fundus images; the names differ only for distinction), fundus category labels, eye category labels, and pixel category labels are acquired; the system is then trained on these data, with a fundus classification sub-loss computed from the output of the fundus prediction module and the fundus category labels, an eye classification sub-loss computed from the output of the eye classification module and the eye category labels, a fundus segmentation sub-loss computed from the output of the segmentation prediction module and the pixel category labels, a total loss computed from the three sub-losses, and gradient computation and parameter updates applied to the feature extraction module, fundus prediction module, segmentation prediction module, and eye classification module based on the total loss.
In some embodiments of the invention, the total loss is calculated as follows:
L_all = α*L_seg + β*L_clf-1 + γ*L_clf-2
where L_all denotes the total loss, L_seg the fundus segmentation sub-loss, L_clf-1 the fundus classification sub-loss, and L_clf-2 the eye classification sub-loss; α, β, and γ denote the weights of the fundus segmentation sub-loss, the fundus classification sub-loss, and the eye classification sub-loss, respectively.
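A sketch of this weighted total loss, assuming cross entropy for the pixel-level and eye losses and binary cross entropy for the multi-label fundus loss (the description below names Dice loss and cross entropy as typical choices); the default weights follow the 0.2/1/0.4 example given later.

```python
import torch.nn.functional as F

def total_loss(seg_logits, pixel_labels, fundus_logits, fundus_labels,
               eye_logits, eye_labels, alpha=0.2, beta=1.0, gamma=0.4):
    l_seg = F.cross_entropy(seg_logits, pixel_labels)              # L_seg
    l_clf1 = F.binary_cross_entropy_with_logits(                   # L_clf-1
        fundus_logits, fundus_labels.float())
    l_clf2 = F.cross_entropy(eye_logits, eye_labels)               # L_clf-2
    return alpha * l_seg + beta * l_clf1 + gamma * l_clf2          # L_all
```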
In some embodiments of the invention, the plurality of quantitative indicators includes a number indicator and an area indicator of the lesion, and the quantitative analysis module is configured to:
when the fundus category corresponding to the fundus image is any myopia-associated fundus, grading the degree of fundus lesions corresponding to the fundus image according to at least one quantitative index of the quantity index and the area index of the lesions, so as to obtain grading indexes.
In some embodiments of the present invention, when the fundus category corresponding to the fundus image is any myopia-associated fundus, grading the degree of the fundus lesion according to at least one of the number indicator and the area indicator of the lesions to obtain a grading indicator includes:
determining a lesion value of the fundus lesion according to at least one of the number indicator and the area indicator, and determining the grade of the fundus lesion according to the grading threshold interval in which the lesion value lies, wherein the grading thresholds that delimit the grading threshold intervals are obtained as follows:
randomly sampling a portion of samples from a collected sample set covering various age groups, regions, and degrees of myopia; determining a sampling interval from the grading granularity and the total number of sampled samples; and arranging the lesion values of all sampled samples in order of magnitude and sampling them at the determined interval to obtain a plurality of grading thresholds for grading.
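This interval-sampling procedure amounts to equal-frequency binning; a minimal sketch (function name assumed):

```python
import numpy as np

def grading_thresholds(lesion_values, num_grades=10):
    """Sort the sampled population's lesion values and take every
    (N // num_grades)-th value: num_grades - 1 thresholds result."""
    values = np.sort(np.asarray(lesion_values))
    interval = len(values) // num_grades   # sampling interval
    return [float(values[interval * i]) for i in range(1, num_grades)]
```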
In some embodiments of the invention, the area indicator comprises: the minimum area of the focus, the maximum area of the focus, the total area of the focus, and the ratio of the total area of the focus to the total area of the optic disc.
In some embodiments of the present invention, when the fundus category corresponding to the fundus image is an arc-shaped plaque fundus, the degree of the arc-shaped plaque lesion corresponding to the fundus image is classified according to a ratio of a focus total area of the arc-shaped plaque to a disc total area.
In some embodiments of the present invention, the total focal area of the arc-shaped spots is a weighted area: the temporal and nasal sides of the optic disc on the segmentation prediction map are determined according to the eye, the segmentation prediction map is divided into a plurality of sub-regions according to the temporal and nasal sides, and the areas of the arc-shaped spots in the segmentation prediction map are weighted and summed according to the region weights of the sub-regions and the category weights of the arc-shaped spot pixel categories to obtain the weighted area.
Preferably, the region weight of the region relatively closer to the nose side is greater than the region weight of the region relatively farther from the nose side.
Preferably, the plurality of arc-shaped spot pixel categories include pigment arc-shaped spots, choroidal arc-shaped spots, mixed arc-shaped spots, and scleral arc-shaped spots, whose category weights increase in that order (pigment smallest, sclera largest); a sketch of the weighted-area computation follows.
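A sketch of the weighted-area computation referenced above; the numeric weights and the integer class/region ids are illustrative assumptions (the text fixes only the orderings):

```python
import numpy as np

# Illustrative weights only: the description fixes their ordering
# (nasal-side regions and scleral arc spots weigh most), not the values.
REGION_WEIGHTS = {0: 1.0, 1: 2.0, 2: 2.0, 3: 3.0, 4: 3.0, 5: 4.0}
# region ids (assumed): 0 temporal, 1/2 temporal upper/lower,
#                       3/4 nasal upper/lower, 5 nasal
CLASS_WEIGHTS = {2: 1.0, 3: 2.0, 4: 3.0, 5: 4.0}
# pixel class ids (assumed): 2 pigment, 3 choroidal, 4 mixed, 5 scleral

def weighted_arc_area(seg_map, region_map):
    """Weighted sum of arc-spot pixel counts over (class, sub-region) pairs."""
    total = 0.0
    for cls_id, cw in CLASS_WEIGHTS.items():
        for reg_id, rw in REGION_WEIGHTS.items():
            total += cw * rw * np.sum((seg_map == cls_id) & (region_map == reg_id))
    return total
```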
In some embodiments of the invention, the plurality of myopia-associated fundus types is a combination of categories among the arc-shaped plaque fundus, diffuse atrophy fundus, patch atrophy fundus, macular region atrophy fundus, and leopard fundus; the plurality of arc-shaped spot pixel categories is a combination of categories among pigment, choroidal, mixed, and scleral arc-shaped spots; and the plurality of atrophy patch pixel categories includes diffuse atrophy and patch atrophy.
According to a second aspect of the present invention, there is provided a fundus image analysis method implemented based on the system of the first aspect, the method comprising: acquiring a fundus image to be analyzed; sampling the fundus image by a feature extraction module to extract a fundus feature map; analyzing fundus categories corresponding to fundus images by a fundus prediction module according to the fundus feature map; sampling, by a segmentation prediction module, the fundus feature map to analyze a corresponding segmentation prediction map of the fundus image, which indicates a class of each pixel in the fundus image; outputting the fundus category and the segmentation prediction map obtained through analysis.
In some embodiments of the invention, the system further comprises an eye classification module and a quantitative analysis module, and the method further comprises: analyzing, by the eye classification module according to the fundus feature map, the eye corresponding to the fundus image (left eye or right eye); performing quantitative analysis, by the quantitative analysis module, according to the fundus category and segmentation prediction map corresponding to the fundus image, or according to the fundus category, eye category, and segmentation prediction map; and outputting the various quantitative indicators.
For some details of the method, reference may be made to the foregoing system embodiments; they are not repeated here.
According to a third aspect of the present invention, there is provided an electronic device comprising: one or more processors; and a memory, wherein the memory is for storing executable instructions; the one or more processors are configured to execute the executable instructions to implement the method of the second aspect.
Compared with the prior art, the invention has the advantages that:
1. the invention applies machine learning to overcome the limitation that conventional myopic fundus classification offers only a coarse pathological-myopia-related category;
2. the invention integrates myopic fundus classification, eye classification, and pixel-level segmentation into a unified end-to-end training framework, fully exploiting the strengths of each model;
3. for myopic fundus at different stages (early myopia, high myopia, pathological myopia, etc.), the invention uses the segmentation prediction model, after selecting thresholds by randomly sampling and analyzing the whole population, to quantitatively grade the atrophic fundus (corresponding to high and pathological myopia) and the arc-shaped plaque fundus (corresponding to the early myopic fundus) at finer granularity;
4. the fundus image is divided into different regions according to the center of the optic disc region and the eye category, and different weights are assigned to the different arc-shaped spot categories in each region, yielding quantitative indicators that reflect the state of the arc-shaped spots more accurately and intuitively.
Drawings
Embodiments of the invention are further described below with reference to the accompanying drawings, in which:
FIG. 1 is a schematic fundus image corresponding to various fundus lesion manifestations;
fig. 2 is a block diagram of a fundus image analysis system according to an embodiment of the present invention;
FIG. 3 is an exemplary implementation of a feature extraction module according to an embodiment of the invention;
FIG. 4 is another illustrative implementation of a feature extraction module in accordance with an embodiment of the invention;
FIG. 5 is a schematic diagram of a second layer of the feature extraction module of the embodiment of FIG. 3 according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a U-shaped network structure formed by a feature extraction module and a segmentation prediction module according to an embodiment of the present invention;
fig. 7 is a block diagram of a fundus image analysis system according to another embodiment of the present invention;
FIG. 8 is a schematic diagram of training loss of a fundus image analysis system according to an embodiment of the present invention;
fig. 9 is a schematic view of region division of the fundus according to an embodiment of the present invention.
Detailed Description
For the purpose of making the technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by way of specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As mentioned in the Background section, current myopic fundus analysis is often limited to identifying whether the fundus is abnormal and cannot give a more detailed analysis of the specific condition of the myopic fundus, so it is difficult for practitioners to provide targeted diagnosis, treatment, or daily eye-care advice to myopic patients. To improve on this, the invention extracts a fundus feature map from the original fundus image, predicts from it whether the fundus image shows a normal fundus or a myopia-associated fundus, and performs multi-channel segmentation on the fundus feature map to obtain a segmentation prediction map, in which the predicted value of each pixel corresponds to one of a background pixel category, an optic disc pixel category, a plurality of arc-shaped spot pixel categories, and a plurality of atrophy patch pixel categories. A doctor or optometrist can thereby learn the myopic condition of the examinee's fundus and the distribution of the various arc-shaped spot and atrophy patch pixel categories in the fundus image, and so offer the examinee better diagnosis, treatment, or daily eye-care advice.
Before describing embodiments of the present invention in detail, some of the terms used therein are explained as follows:
Fundus refers to the area of the back of the eye, including the anatomy of the retina, papilla, macula, and central retinal artery.
Optic disc: also called the optic nerve head or optic papilla. It is a pale red disc-shaped structure on the retina, about 1.5 mm in diameter, located on the nasal side of the macula.
Pigment arc-shaped spot: a black crescent-shaped type of arc spot formed on the fundus. In the early stage of myopia, slight traction of the ocular axis causes the pigment epithelial cells along the temporal edge of the optic disc to gather into a black crescent arc; a fundus image of the corresponding sign is shown in fig. 1a.
Choroidal arc-shaped spot: a type of arc spot formed on the fundus by exposure of the choroid. In high myopia and pathological myopia, as the eyeball stretches backward, the sclera expands and pulls on the surrounding tissue; the retinal pigment epithelium and the choroid (Bruch's membrane) detach from the temporal side of the optic disc and stop at some distance from it. The retinal pigment epithelium is missing in the detached area, exposing the underlying choroid, which appears as a gray crescent-shaped area on the fundus image; see fig. 1b.
Mixed arc-shaped spot: a type of arc spot in which the choroid and sclera are cross-exposed on the fundus. The involved area shows cross-exposure of the choroid and sclera, appearing gray-white on the fundus image; see fig. 1c.
Scleral arc-shaped spot: a type of arc spot formed on the fundus by severe exposure of the sclera. Mixed arc spots reflect relatively mild involvement; with heavier involvement the choroid is also pulled away from the optic disc, both the retinal pigment epithelium and the choroid are missing in the detached zone, the underlying sclera is exposed, and a characteristic white arc spot appears on the fundus image; see fig. 1d.
Diffuse atrophy: the type of atrophy patch corresponding to disorder of the retinal pigment epithelium and choroidal pigment in the temporal region of the optic disc. These disorders form isolated or multiple yellowish-white areas, irregular in shape, small and extensive; see fig. 1e.
Patch atrophy: the type of atrophy patch in which patchy atrophy areas appear on the fundus. The fundus shows small, localized, isolated atrophy foci (atrophy zones), circular, white or yellowish-white, with pigment clumps visible at their rims; see fig. 1f.
Leopard fundus: a fundus type showing leopard-print-like texture. In the myopic fundus, backward extension of the eyeball causes the retinal blood vessels to become thinner and straighter after leaving the optic disc, and the choroidal vessels correspondingly thin and straighten or visibly diminish. Meanwhile, dystrophy of the pigment epithelial layer causes the superficial pigment to fade, so the orange-red choroidal vessels show through more clearly, presenting a leopard-like texture; see fig. 1g.
Macular region atrophy: the type of atrophy patch occurring in the region of the macula. In some late stages of pathological myopia, choroidal vascular occlusion may develop, with single or multiple focal atrophic degeneration of the pigment epithelium and choriocapillaris accompanied by pigment migration. The fundus shows atrophy areas of varying form distributed around the macula; see fig. 1h.
Annular arc-shaped spot: arc spots generally extend from the temporal region toward the superior temporal, inferior temporal, and nasal sides; in a small fraction of cases the arc spots extending into the nasal region develop into annular arc spots; see fig. 1i.
According to an embodiment of the present invention, referring to fig. 2, there is provided a fundus image analysis system including a feature extraction module 1, a segmentation prediction module 2, and a fundus prediction module 3. The system can be installed in an electronic device such as a computer or a server.
The feature extraction module 1, which may also be referred to as an encoding module (Encoder), is a multi-layer neural network model that samples the fundus image to be analyzed to extract a fundus feature map. Preferably, it downsamples the fundus image multiple times: each downsampling halves the resolution of the input while gradually increasing the number of feature channels, yielding the corresponding fundus feature map. The image is thereby sampled into a high-level abstract semantic space, improving the expressive capacity of the model.
Embodiment 1 of the feature extraction module 1 is shown in fig. 3. The module comprises five layers of sub-networks; the fundus image is downsampled repeatedly through the first to fifth layers, the resolution decreasing and the number of feature channels increasing layer by layer. The original 600×600×3 fundus image (height × width × number of feature channels; subsequent figures follow the same convention and are not further explained) is processed by the first layer into feature map 1 of 300×300×20, by the second layer into feature map 2 of 150×150×40, by the third layer into feature map 3 of 75×75×80, by the fourth layer into feature map 4 of 38×38×160, and by the fifth layer into feature map 5 of 19×19×320.
The number of sub-network layers of the feature extraction module 1, the size of the input fundus image, and the sizes of the feature maps can all be set according to the practitioner's needs. Embodiment 2 of the feature extraction module 1 is shown in fig. 4, where the module comprises four layers of sub-networks; the fundus image is downsampled repeatedly through the first to fourth layers, the resolution decreasing and the number of feature channels increasing layer by layer. The original 400×400×3 fundus image is processed by the first layer into feature map 1 of 150×150×40, by the second layer into feature map 2 of 75×75×80, by the third layer into feature map 3 of 38×38×160, and by the fourth layer into feature map 4 of 19×19×320.
According to other embodiments of the present invention, the feature extraction module 1 can take various alternative network structures, for example the feature extraction part of a network comprising convolutional neural network (CNN) convolution blocks, Transformer blocks based on a self-attention mechanism, U-Net, and the like, or combinations thereof.
In one embodiment, each layer of the sub-network of the feature extraction module 1 may be a CNN convolution block, for example comprising any combination of batch normalization (BN) layers, convolution layers, pooling layers, activation layers, and the like. Referring to fig. 5 and taking the second-layer sub-network of embodiment 1 (fig. 3) as an example, it may include convolution block 1 and convolution block 2: feature map 1 is input to convolution block 1, passes sequentially through its batch normalization layer, convolution layer, and activation layer, and is then input to convolution block 2, passing sequentially through its batch normalization layer, convolution layer, activation layer, and average pooling layer to yield feature map 2. Specifically, the 300×300×20 feature map 1 is processed by the batch normalization layer of convolution block 1 into feature map 1.1 of 300×300×20, by the convolution layer into feature map 1.2 of 300×300×40, and by the activation layer into feature map 1.3 of 300×300×40; feature map 1.3 is input to convolution block 2 and processed by its batch normalization layer into feature map 1.4 of 300×300×40, by its convolution layer into feature map 1.5 of 300×300×40, by its activation layer into feature map 1.6 of 300×300×40, and by its average pooling layer into feature map 2 of 150×150×40. Thus the convolution of block 1 doubles the number of feature channels (from 20 to 40), the convolution of block 2 keeps the number of channels unchanged, and both activation layers use the ReLU activation function.
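A sketch of this second-layer sub-network in PyTorch, mirroring the BN → convolution → activation layout (with average pooling closing block 2); the helper name is assumed:

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, pool=False):
    layers = [nn.BatchNorm2d(in_ch),
              nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
              nn.ReLU()]
    if pool:
        layers.append(nn.AvgPool2d(2))   # halves the spatial resolution
    return nn.Sequential(*layers)

# Second-layer sub-network of embodiment 1: 300x300x20 -> 150x150x40
second_layer = nn.Sequential(
    conv_block(20, 40),                  # block 1: doubles the channels
    conv_block(40, 40, pool=True),       # block 2: keeps channels, then pools
)
```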
In another embodiment, the feature extraction module 1 combines CNN convolution blocks with Transformer blocks based on a self-attention mechanism. For example, in the structure shown in fig. 5, convolution block 1 is replaced by a Transformer block that uses a multi-head attention layer instead of a convolution layer, everything else remaining unchanged.
It should be understood that the number of network layers, the number of convolution blocks per layer, and the number of batch normalization, convolution, activation, and pooling layers in each convolution block in the above embodiments may be adjusted according to the actual situation. In other words, the specific neural network structures above are given merely as examples to aid understanding of the technical solution; those skilled in the art may set or adjust them as required within the scope of the invention. For example, the convolution of block 2 could instead be configured to increase the number of feature channels while the convolution of block 1 leaves it unchanged; or block 1 could adjust the channels from 20 to 30 and block 2 from 30 to 40. Likewise, the activation function of the activation layers can be set and adjusted as needed, e.g., the Mish activation function, and the sizes and channel counts of the input and output images or intermediate feature maps may also be set and adjusted as needed.
According to one embodiment of the invention, the feature extraction module 1 may also adopt an existing structure, for example UNet++, UPerNet, or the encoding (downsampling) portion of a fully convolutional network (FCN) without skip connections.
The segmentation prediction module 2 in fig. 2, which may also be referred to as a decoding module (Decoder), is a multi-layer neural network model that samples the fundus feature map to analyze the segmentation prediction map corresponding to the fundus image. Preferably, it upsamples the fundus feature map multiple times into a multi-channel segmentation map and obtains the segmentation prediction map from it. Optionally, from the second upsampling onward, each upsampled feature map is superimposed with the feature map of matching resolution output by the corresponding layer of the feature extraction module 1. The resolution of the fundus feature map from the last downsampling is increased by a factor of 2 layer by layer, finally yielding a multi-channel segmentation map with the same resolution as the original input, whose number of channels equals the total number of categories (for example, 8) covering the background pixel category, the optic disc pixel category, the plurality of arc-shaped spot pixel categories, and the plurality of atrophy patch pixel categories. A maximum-value (Argmax) operation then gives the pixel category at each pixel position, and the categories of all pixels form the segmentation prediction map.
The feature extraction module 1 and the segmentation prediction module 2 together form a U-shaped network; one exemplary embodiment is the structure shown in fig. 6. The 600×600×3 fundus image is processed (downsampled) by the 5-layer sub-network of the feature extraction module 1, halving the resolution and gradually increasing the channel count at each layer, to obtain in turn feature map 1 (300×300×20), feature map 2 (150×150×40), feature map 3 (75×75×80), feature map 4 (38×38×160), and feature map 5 (19×19×320), the last being the fundus feature map from the final downsampling. The 5-layer sub-network of the segmentation prediction module 2 then processes (upsamples) it, each layer doubling the resolution and gradually reducing the channel count, to obtain in turn feature map 6 (38×38×160), feature map 7 (75×75×80), feature map 8 (150×150×40), feature map 9 (300×300×20), and feature map 10 (600×600×20), after which a 1×1 convolution yields feature map 11 (600×600×8), the multi-channel segmentation map. At every layer of the segmentation prediction module 2 except the first, the feature map of matching resolution from the corresponding layer of the feature extraction module 1 is brought in through a skip connection, superimposed with the output of the previous layer, and then upsampled.
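A sketch of one decoder step with a skip connection and of the final Argmax, under the assumption that the upsampled and encoder feature maps are combined by channel concatenation (the text says "superimposed"):

```python
import torch
import torch.nn.functional as F

def decode_step(x, skip, conv):
    """Upsample x by 2, merge the encoder feature map of matching
    resolution via the skip connection, then apply the layer's conv."""
    x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
    return conv(torch.cat([x, skip], dim=1))

def to_prediction_map(multi_channel_seg):
    # (N, 8, H, W) scores -> (N, H, W) pixel category ids via Argmax
    return multi_channel_seg.argmax(dim=1)
```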
The fundus prediction module 3 in fig. 2 is a neural network model for the fundus-category prediction task. It comprises a fully-connected layer or a Transformer layer, predicts the fundus category of the fundus image from the fundus feature map, and outputs one or more of the normal fundus and the various myopia-associated fundus categories. The fundus category predicted by the fundus prediction module 3 is a preliminary category of the progression of the myopic fundus.
According to one embodiment of the invention, the plurality of myopia-associated fundus types is a combination of categories among the arc-shaped plaque fundus, diffuse atrophy fundus, patch atrophy fundus, and macular region atrophy fundus. The leopard fundus is also a common manifestation of the myopic fundus; adding it as a category lets the model learn the corresponding fundus semantic information and improves system performance. Preferably, the myopia-associated fundus types are a combination of categories among the arc-shaped plaque fundus, leopard fundus, diffuse atrophy fundus, patch atrophy fundus, and macular region atrophy fundus: for example, all five, or with one or more categories removed (e.g., without the leopard fundus: arc-shaped plaque fundus, diffuse atrophy fundus, patch atrophy fundus, macular region atrophy fundus). It will be appreciated that practitioners may add further categories beyond those above as desired. Because the arc-shaped plaque, leopard, diffuse atrophy, patch atrophy, and macular region atrophy fundus may coexist, the activation function of the fundus prediction module 3 is a Sigmoid suited to multi-label classification: each category label is judged positive or negative against its own independent threshold, and all predicted categories are then aggregated. The normal fundus, however, is mutually exclusive with the myopia-associated fundus types: if normal fundus is output, no myopia-associated category is output. According to the applicant's study of the training data results, the pigment, choroidal, mixed, and scleral arc-shaped spots appear similar when viewed on the whole fundus, whereas the diffuse atrophy, patch atrophy, and macular region atrophy fundus can be clearly distinguished on the whole fundus. Therefore, for more accurate classification, the fundus types corresponding to the various arc-shaped spots are unified into a single arc-shaped plaque fundus category, reducing the impact on model parameters, while diffuse atrophy, patch atrophy, and macular region atrophy each correspond to their own fundus category, providing richer and more efficient semantic information.
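A sketch of the multi-label decision rule, with placeholder category names and a uniform 0.5 threshold (the text specifies independent, tunable per-category thresholds):

```python
import torch

CLASS_NAMES = ["normal", "arc_plaque", "leopard", "diffuse_atrophy",
               "patch_atrophy", "macular_atrophy"]          # assumed order
THRESHOLDS = torch.tensor([0.5] * 6)                        # per-class, tunable

def predict_fundus(logits):
    probs = torch.sigmoid(logits)            # multi-label Sigmoid activation
    positive = probs > THRESHOLDS
    if bool(positive[0]):                    # normal fundus excludes the rest
        return ["normal"]
    return [name for name, p in zip(CLASS_NAMES[1:], positive[1:]) if p]
```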
The training of the feature extraction module 1, the segmentation prediction module 2, and the fundus prediction module 3 is described below. In one example, 3814 fundus pictures were used, all from the real world, obtained by random sampling, and covering various age groups, camera brands, and degrees of myopia. Each picture was annotated by a professional ophthalmologist. The annotations comprise:
fundus category label: normal fundus, arc-shaped plaque fundus, leopard fundus, diffuse atrophy fundus, patch atrophy fundus, macular atrophy fundus;
pixel class label (pixel level split label): background, optic disc, pigmented arc spot, choroidal arc spot, scleral arc spot, mixed arc spot, diffuse atrophy, and patch atrophy.
The training and test sets can be split on an 8:2 basis: 3000 of the 3814 pictures are randomly drawn for training, and the remaining 814 serve as the test set.
The total loss calculated during training equals the weighted sum of the fundus classification sub-loss and the fundus segmentation sub-loss:
L_all = α*L_seg + β*L_clf-1
where L_all denotes the total loss, L_seg the fundus segmentation sub-loss, and L_clf-1 the fundus classification sub-loss; α and β denote the weights of the segmentation and classification sub-losses, respectively. The fundus segmentation sub-loss is typically a Dice loss, pixel-level cross entropy, or the like; the fundus classification sub-loss may be cross entropy or any other classification loss. It should be appreciated that α and β may be adjusted as appropriate. In this embodiment, training supplies the system with both image-level high-level semantic supervision (the fundus category and eye category of the whole image) and pixel-level semantic supervision for every pixel, so the system learns knowledge at different levels more fully, further improving model performance.
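For concreteness, a minimal multi-class Dice loss sketch, one of the options named above for the segmentation sub-loss:

```python
import torch
import torch.nn.functional as F

def dice_loss(seg_logits, pixel_labels, eps=1e-6):
    """seg_logits: (N, C, H, W); pixel_labels: (N, H, W) integer class ids."""
    num_classes = seg_logits.shape[1]
    probs = F.softmax(seg_logits, dim=1)
    one_hot = F.one_hot(pixel_labels, num_classes).permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(0, 2, 3))        # per-class overlap
    union = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()
```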
The fundus image analysis above mainly analyzes the possible lesions of the whole fundus and the category to which each pixel belongs, giving doctors or optometrists a reference for understanding the myopic condition of the examinee's fundus and the distribution of the relevant arc-shaped spot and atrophy patch pixel categories in the fundus image. However, without fine-grained quantitative analysis of the main lesion types of the myopic fundus, the severity of the corresponding signs and the basis for prediction remain unclear, and doctors or optometrists cannot give myopic patients better diagnosis, treatment, or prevention advice. The above system can therefore be further improved.
According to one embodiment of the present invention, referring to fig. 7, the fundus image analysis system further includes an eye classification module 4 and a quantization analysis module 5.
The eye classification module 4 is a neural network for predicting the eye category (left eye or right eye) to which the fundus image belongs. It comprises a fully-connected layer or a Transformer layer, predicts the eye category from the fundus feature map, and outputs left eye or right eye. The eye category is used by the quantitative analysis module 5 to divide the fundus into regions around the center position of the optic disc. Since the left-eye and right-eye categories cannot occur simultaneously, the module performs a multi-class classification task: it uses a softmax activation function and finally outputs the category of maximum probability (Argmax).
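The decision rule of this head is simply softmax followed by Argmax; a two-line sketch:

```python
import torch

def predict_eye(eye_logits):
    probs = torch.softmax(eye_logits, dim=-1)   # mutually exclusive classes
    return ("left", "right")[int(probs.argmax())]
```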
Since the added eye classification module 4 is also a neural network model, training is required. Referring to fig. 8, the inputs to the system include fundus images and three kinds of labels (fundus category label, eye category label, and pixel category label), and the model outputs are fundus category, eye category, and pixel category. At this time, the total loss calculated at the time of training is equal to the weighted sum of fundus classifier loss, fundus segmentation sub-loss, and ocular classification sub-loss:
L_all = α*L_seg + β*L_clf-1 + γ*L_clf-2
where L_all denotes the total loss, L_seg the fundus segmentation sub-loss, L_clf-1 the fundus classification sub-loss, and L_clf-2 the eye classification sub-loss; α, β, and γ are their respective weights. A corresponding loss calculation module can be provided in the system during training to compute these losses. The eye classification sub-loss may be a cross entropy loss or any other classification loss, and α, β, γ can be adjusted as needed; for example, they may be set to 0.2, 1, and 0.4, respectively. The invention integrates fundus classification, eye classification, and pixel-level segmentation of the background, optic disc, and various lesions (pigment arc-shaped spot, choroidal arc-shaped spot, scleral arc-shaped spot, mixed arc-shaped spot, diffuse atrophy, and patch atrophy) into a unified end-to-end training framework, fully exploiting the strengths of each model and improving the prediction accuracy of the system.
The quantitative analysis module 5 can give a comprehensive evaluation from the segmentation prediction map, the fundus category, and the eye category. It performs post-processing analysis on the segmentation prediction map output by the segmentation prediction module 2, the fundus category output by the fundus prediction module 3, and the eye category output by the eye classification module 4, producing number indicators, area indicators, grading indicators, and combinations thereof. For myopic fundus at different stages (early myopia, high myopia, pathological myopia, etc.), the invention predicts with the segmentation prediction module and, after selecting thresholds by randomly sampling and analyzing the whole population, quantitatively grades the atrophic fundus (corresponding to high and pathological myopia) and the arc-shaped plaque fundus (corresponding to the early myopic fundus) at finer granularity, enabling more accurate analysis of the condition and better diagnosis and treatment advice for myopic patients.
If the classification result predicted by the fundus prediction module 3 is a normal fundus, the quantization analysis module 5 does not need to perform quantization analysis.
If the classification result predicted by the fundus prediction module 3 includes an atrophy patch type (diffuse atrophy, patch atrophy, macular region atrophy), the module performs quantitative analysis on the segmented categories (diffuse atrophy, patch atrophy). The analysis covers the total number, total area, maximum area, minimum area, the ratio of total lesion area to total optic disc area, and fine-grained quantitative grading. The area indicators are pixel counts relative to the original image resolution; the pixel area can be converted into the true physical area (actual area) according to the real physical fundus diameter corresponding to each camera brand.
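A sketch of these indicators using connected-component analysis, assuming SciPy is available and with hypothetical class-id arguments; the mm² conversion takes a camera-dependent pixel pitch:

```python
import numpy as np
from scipy import ndimage

def lesion_indices(seg_map, lesion_class_id, disc_class_id, mm2_per_pixel=None):
    mask = seg_map == lesion_class_id
    labeled, count = ndimage.label(mask)          # connected lesion regions
    areas = (ndimage.sum(mask, labeled, range(1, count + 1))
             if count else np.zeros(0))
    disc_area = float(np.sum(seg_map == disc_class_id))
    out = {
        "count": count,
        "total_area": float(mask.sum()),          # pixel area
        "max_area": float(areas.max()) if count else 0.0,
        "min_area": float(areas.min()) if count else 0.0,
        "ratio_to_disc": float(mask.sum()) / disc_area if disc_area else 0.0,
    }
    if mm2_per_pixel is not None:                 # convert to physical area
        out["total_area_mm2"] = out["total_area"] * mm2_per_pixel
    return out
```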
For example, if the fundus prediction module 3 predicts a diffuse atrophy fundus, the lesion value of that lesion type (diffuse atrophy) is determined from one or more predetermined quantitative indicators (e.g., total number, total area, maximum area, minimum area, the ratio of total diffuse atrophy area to total optic disc area, and combinations thereof). For instance, a practitioner may take the ratio of the total diffuse atrophy area to the total optic disc area as the lesion value used to grade diffuse atrophy lesions (it should be understood that a weighted sum of multiple quantitative indicators may also serve as the lesion value, as desired). As for macular region atrophy: atrophy patches located in the macular region are in fact patch atrophy of higher severity, so the invention assigns macular region atrophy its own fundus category; if it is present, the corresponding fundus category is output to alert the doctor, so that diagnosis and treatment advice can be given to myopic patients more precisely. For quantitative grading, however, when both the patch atrophy fundus and the macular region atrophy fundus appear, grading can be carried out on the patch atrophy (including the macular atrophy patches), and the severity of macular atrophy distinguished at finer granularity according to the grade.
According to one embodiment of the invention, fine-grained grading of the atrophy patches requires determining the grading threshold intervals for each atrophy patch type; the required grading thresholds are obtained as follows: randomly sample a portion of samples from a collected sample set covering various age groups, regions, and degrees of myopia; determine a sampling interval from the grading granularity of the atrophy patches and the total number of sampled samples; arrange all samples in order of the quantitative indicator used for grading the atrophy patches, and sample at the determined interval to obtain the grading thresholds that divide the corresponding grading threshold intervals. For fine-grained quantitative grading of the segmented categories (diffuse atrophy, patch atrophy), take a ten-level grading of the patch atrophy category as an example; the following procedure is used:
Randomly sample 100,000 images across the whole population distribution (covering various age groups, regions, degrees of myopia, etc.), compute for each image the ratio of total lesion area to total optic disc area (both may be pixel areas or actual areas computed from them, e.g., the ratio of the total pixel area of the patch atrophy lesions to the total pixel area of the optic disc), and sort the ratios from small to large;
taking every 10,000 cases as an interval, sequentially extract the 9 ratios of total patch-atrophy lesion area to total optic disc area and use them as the thresholds dividing the ten grades (9 grading thresholds, constructing the grading threshold intervals of the 10 grades of atrophy spots); in the subsequent system inference stage these thresholds determine the fine-grained grade of patch atrophy. For example, the 9 grading thresholds may be 0.05318, 0.11904, 0.18408, 0.258, 0.32726, 0.40234, 0.50363, 0.6608 and 0.97272; correspondingly, the 10 grading threshold intervals are: grade 1, (0, 0.05318); grade 2, [0.05318, 0.11904); grade 3, [0.11904, 0.18408); grade 4, [0.18408, 0.258); grade 5, [0.258, 0.32726); grade 6, [0.32726, 0.40234); grade 7, [0.40234, 0.50363); grade 8, [0.50363, 0.6608); grade 9, [0.6608, 0.97272); grade 10, [0.97272, +∞);
random sampling and threshold extraction may be repeated as needed.
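A minimal sketch of this interval-sampling procedure follows (Python; it assumes the number of sorted samples is divisible by the number of grades, and all names are illustrative):

```python
import numpy as np

def grading_thresholds(lesion_values, n_grades: int = 10) -> np.ndarray:
    """Take every (len/n_grades)-th value of the sorted lesion values,
    yielding n_grades - 1 thresholds (e.g. 9 thresholds from 100,000
    sorted ratios at intervals of 10,000)."""
    values = np.sort(np.asarray(lesion_values, dtype=float))
    step = len(values) // n_grades                  # sampling interval
    return values[step::step][: n_grades - 1]

def assign_grade(lesion_value: float, thresholds: np.ndarray) -> int:
    """Map a lesion value to its 1-based grade; a value equal to a
    threshold falls into the next (higher) interval, matching the
    half-open intervals [t_k, t_{k+1}) above."""
    return int(np.searchsorted(thresholds, lesion_value, side="right")) + 1

# With the nine example thresholds listed above:
# ts = np.array([0.05318, 0.11904, 0.18408, 0.258, 0.32726,
#                0.40234, 0.50363, 0.6608, 0.97272])
# assign_grade(0.3, ts)  # -> 5, since 0.3 lies in [0.258, 0.32726)
```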
In the fundus of a myopic patient, optic disc traction gradually increases with the degree of myopia. Generally, an arc-shaped spot first appears in the temporal region of the optic disc, then gradually develops into the superior and inferior temporal regions, then into the superior and inferior nasal regions, and finally into the nasal region, possibly even forming a ring-shaped arc spot. That is, the severity associated with arc-shaped spots in the different fundus regions, from low to high, is: temporal region; superior/inferior temporal regions; superior/inferior nasal regions; nasal region. Likewise, as myopia increases and optic disc traction grows more severe, pigment arc-shaped spots may appear first, followed by choroidal arc-shaped spots, mixed arc-shaped spots and finally scleral arc-shaped spots. That is, the severity associated with the arc-shaped spot categories, from low to high, is: pigment arc-shaped spot; choroidal arc-shaped spot; mixed arc-shaped spot; scleral arc-shaped spot.
In one embodiment, if the classification result of the fundus prediction module 3 includes an arc-shaped spot category (at least one of pigment, choroidal, scleral and mixed arc-shaped spots), the quantitative analysis module 4 performs a more precise quantitative analysis of the relevant arc-shaped spot categories according to the predicted segmentation prediction map, for example with the following analysis contents:
based on the center of the optic disc region in the segmentation prediction map, the fundus region is divided into nasal and temporal sides according to the left/right eye classification result. For a left eye, the region to the left of the optic disc center is the nasal side and the region to the right is the temporal side; for a right eye, the region to the right of the optic disc center is the nasal side and the region to the left is the temporal side. It should be understood that the practitioner may also draw a vertical line through the optic disc center and rotate it by different angles about the center to divide the fundus into corresponding sub-regions. An exemplary division is shown in fig. 9, where for a left eye the line is rotated clockwise and counterclockwise by 45 degrees, dividing the entire fundus into six regions: temporal, superior temporal, inferior temporal, superior nasal, inferior nasal and nasal. More rotation angles would yield finer-grained regions; 45 degrees is used here for illustration;
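This angular division can be sketched as follows (Python; a hypothetical implementation in which the region codes, coordinate conventions and function name are assumptions for illustration):

```python
import numpy as np

# Illustrative sub-region codes
TEMPORAL, SUP_TEMPORAL, INF_TEMPORAL, SUP_NASAL, INF_NASAL, NASAL = range(6)

def fundus_subregions(shape, disc_center, is_left_eye):
    """Assign each pixel one of six sub-regions around the optic disc center.

    The vertical line through the disc center is rotated by +/-45 degrees,
    splitting the fundus into temporal, superior/inferior temporal,
    superior/inferior nasal and nasal sectors. The temporal direction is to
    the right of the disc for a left eye and to the left for a right eye.
    """
    h, w = shape
    cy, cx = disc_center
    ys, xs = np.mgrid[0:h, 0:w]
    dx = (xs - cx) if is_left_eye else (cx - xs)   # + = temporal direction
    dy = cy - ys                                   # + = up (image y grows down)
    ang = np.degrees(np.arctan2(dy, dx))           # -180..180, 0 = temporal

    region = np.full((h, w), NASAL, dtype=np.uint8)   # default: |ang| >= 135
    region[np.abs(ang) < 45] = TEMPORAL
    region[(ang >= 45) & (ang < 90)] = SUP_TEMPORAL
    region[(ang >= 90) & (ang < 135)] = SUP_NASAL
    region[(ang <= -45) & (ang > -90)] = INF_TEMPORAL
    region[(ang <= -90) & (ang > -135)] = INF_NASAL
    return region
```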
In one embodiment, the areas of the arc-shaped spots are weighted and summed along two dimensions, the category of the arc-shaped spot and the sub-region in which it lies, according to the formula:

S = Σ_i Σ_j α_i * β_j * S_ij

where α_i denotes the region weight (also called region position weight) of sub-region i among the six sub-regions temporal, superior temporal, inferior temporal, superior nasal, inferior nasal and nasal; according to the pathogenesis of arc-shaped spots, the position weights increase as: temporal < superior temporal = inferior temporal < superior nasal = inferior nasal < nasal. β_j denotes the category weight of arc-shaped spot category j among pigment, choroidal, mixed and scleral arc-shaped spots; according to the pathogenesis, the category weights increase as: pigment arc-shaped spot < choroidal arc-shaped spot < mixed arc-shaped spot < scleral arc-shaped spot. S_ij denotes the pixel area, relative to the resolution of the original image, of arc-shaped spots of category j within sub-region i; for example, if α_i is the weight of the temporal region and β_j the weight of choroidal arc-shaped spots, S_ij is the pixel area of choroidal arc-shaped spots in the temporal region. Optionally, α_1 to α_6 are the weights of the temporal, superior temporal, inferior temporal, superior nasal, inferior nasal and nasal regions, respectively (e.g., 1, 1.2, 1.2, 1.5, 1.5 and 2, the paired superior/inferior regions sharing a weight in accordance with the ordering above), and β_1 to β_4 are the weights of the pigment, choroidal, mixed and scleral arc-shaped spots, respectively (e.g., 0.5, 1, 1.5 and 2);
based on the above formula, calculate the ratio of the total (weighted) lesion area of the arc-shaped spots to the total area of the optic disc;
and carry out fine-grained quantitative grading of the arc-shaped spots. Because the category characteristics and the regional distribution of arc-shaped spots affect the degree of ocular lesion differently, the invention sets corresponding region weights and category weights for the weighted summation of the total arc-shaped spot area, which makes the grading and quantification of the arc-shaped spots more accurate.
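Under the example weights above, the weighted summation and the area ratio can be sketched as follows (Python; the pixel class codes follow the label convention used in the experiments below, and a region map such as the one from the sub-region sketch above is assumed):

```python
import numpy as np

# Illustrative weights following the orderings described above.
REGION_W = np.array([1.0, 1.2, 1.2, 1.5, 1.5, 2.0])  # alpha: temporal ... nasal
CATEGORY_W = {2: 0.5, 3: 1.0, 5: 1.5, 4: 2.0}        # beta, keyed by pixel class
                                                     # (2 pigment, 3 choroidal,
                                                     #  5 mixed, 4 scleral)
OPTIC_DISC = 1

def weighted_arc_ratio(seg_map: np.ndarray, region_map: np.ndarray) -> float:
    """S = sum_ij alpha_i * beta_j * S_ij, divided by the optic disc pixel area.

    seg_map:    (H, W) per-pixel class codes from the segmentation prediction.
    region_map: (H, W) sub-region codes 0..5 (e.g. from fundus_subregions above).
    """
    weighted = 0.0
    for cls, beta in CATEGORY_W.items():
        for region in range(len(REGION_W)):
            s_ij = np.sum((seg_map == cls) & (region_map == region))  # pixel area
            weighted += REGION_W[region] * beta * float(s_ij)
    disc_area = float(np.sum(seg_map == OPTIC_DISC))
    return weighted / disc_area if disc_area else float("nan")
```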
In one embodiment, the ratio of the total lesion area of the arc-shaped spots to the total area of the optic disc is taken as the lesion value, and the grading thresholds needed to construct the grading threshold intervals of the arc-shaped spots are obtained as follows:
randomly sample a subset from a collected sample set covering various age groups, regions and degrees of myopia;
determine a sampling interval according to the grading granularity of the arc-shaped spots and the total number of sampled samples;
arrange all sampled samples in order of the ratio of total arc-shaped spot lesion area to total optic disc area, and sample at the determined interval to obtain the grading thresholds that divide the grading threshold intervals of the arc-shaped spots.
For example, for ten-grade classification, the implementation is as follows:
randomly sample 100,000 images over the whole population distribution, calculate for each image the ratio of the total weighted arc-shaped spot area to the total optic disc area, and sort the ratios from small to large;
taking every 10,000 cases as an interval, sequentially extract 9 ratios and use them as the grading thresholds dividing the ten grades;
in the system inference stage, these thresholds determine the fine-grained grade of the arc-shaped spots;
random sampling and threshold extraction may be repeated as needed.
With respect to the fine-grained quantitative analysis of arc-shaped spots, in one embodiment the module ultimately outputs number indices, area indices and the fine-grained grade of the arc-shaped spots, for example the minimum lesion area, the maximum lesion area, the total lesion area and the ratio of the total lesion area to the total optic disc area. Further, the total number, total pixel area and maximum pixel area of each category of arc-shaped spot in each fundus sub-region can also be output.
Thus, by means of machine learning, the system provided by the invention overcomes the limitation of conventional myopic fundus classification, which offers only coarse categories related to pathological myopia: the segmentation prediction module predicts multiple myopia-related lesions, and the quantitative analysis module supplies number indices, area indices and/or grading indices of those lesions, providing detailed parameters for doctors or optometrists and facilitating the discovery of latent eye disease, so that diagnosis, treatment or eye-care advice can be offered to myopic patients.
According to an embodiment of the present invention, a fundus image analysis method is provided, which can be executed by an electronic apparatus such as a computer or a server. The method analyzes fundus images by means of the fundus image analysis system comprising the neural network described above; the system embodiments and the method embodiments complement each other.
In order to verify the effect of the present invention, the applicant also carried out corresponding experiments, described below:
1. Dataset description
The 3814 fundus images used in the experiments all come from the real world, were obtained by random sampling, and cover the distribution of all age groups, camera brands and degrees of myopia.
The original fundus images are resized to a resolution of 600 × 600 and are three-channel RGB color images in jpg, png, tif or similar formats. Each image was annotated by a professional ophthalmologist. The annotations comprise:
Fundus category label: normal fundus, arc-shaped spot fundus, leopard fundus, diffuse atrophy fundus, patch atrophy fundus, macular-region atrophy fundus; the label is a list of integers from 0 to 5. For example, [0] indicates a normal fundus, while [1,2,3] indicates a fundus with arc-shaped spots, leopard pattern and diffuse atrophy. Apart from the normal fundus label, several category labels may be present simultaneously.
Eye category label: left eye or right eye; the label is a single-element list taking one of only two values, [0] for the left eye and [1] for the right eye.
Pixel category label (pixel-level segmentation label): the segmentation categories are background, optic disc, pigment arc-shaped spot, choroidal arc-shaped spot, scleral arc-shaped spot, mixed arc-shaped spot, diffuse atrophy and patch atrophy. The label is a single-channel array of size 600 × 600; each pixel position takes an integer from 0 to 7 indicating the segmentation category of that pixel, where 0, 1, 2, 3, 4, 5, 6 and 7 respectively represent background, optic disc, pigment arc-shaped spot, choroidal arc-shaped spot, scleral arc-shaped spot, mixed arc-shaped spot, diffuse atrophy and patch atrophy.
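A minimal sketch of summarizing such a label (Python; the function name is illustrative, the class codes follow the description above):

```python
import numpy as np

CLASS_NAMES = ["background", "optic disc", "pigment arc-shaped spot",
               "choroidal arc-shaped spot", "scleral arc-shaped spot",
               "mixed arc-shaped spot", "diffuse atrophy", "patch atrophy"]

def label_summary(pixel_label: np.ndarray) -> dict:
    """Pixel counts per segmentation category for one 600 x 600 label array."""
    assert pixel_label.shape == (600, 600)
    counts = np.bincount(pixel_label.ravel(), minlength=len(CLASS_NAMES))
    return dict(zip(CLASS_NAMES, counts.tolist()))
```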
The training and test sets are split roughly on the 8:2 principle: 3000 of the 3814 images were randomly drawn as the training set, and the remaining 814 serve as the test set.
2. Model architecture of the system
In the experiments, within the fundus image analysis system, the feature extraction module and the segmentation prediction module adopt the structure shown in fig. 6, while the fundus prediction module and the eye classification module each adopt a fully-connected layer (fully-connected network).
3. Summary of the training process
The model from the 97th epoch was ultimately selected as the optimal model for myopic fundus segmentation, as shown in Table 1 below. Here the segmentation loss refers to the loss L_seg computed from the segmentation prediction; the classification loss refers to the weighted sum of L_clf-1 and L_clf-2, corresponding to fundus category prediction and eye category prediction respectively; and the total loss refers to L_all.
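The loss combination L_all = α·L_seg + β·L_clf-1 + γ·L_clf-2 could be sketched as follows, assuming a PyTorch implementation; the particular criteria (cross-entropy for segmentation and eye classification, binary cross-entropy with logits for the multi-label fundus categories) and the unit default weights are assumptions for illustration, not values disclosed by the experiment:

```python
import torch
import torch.nn as nn

seg_criterion = nn.CrossEntropyLoss()      # per-pixel, 8 segmentation classes
fundus_criterion = nn.BCEWithLogitsLoss()  # multi-label fundus categories (float targets)
eye_criterion = nn.CrossEntropyLoss()      # left / right eye

def total_loss(seg_logits, seg_target, fundus_logits, fundus_target,
               eye_logits, eye_target, alpha=1.0, beta=1.0, gamma=1.0):
    """L_all = alpha * L_seg + beta * L_clf-1 + gamma * L_clf-2."""
    l_seg = seg_criterion(seg_logits, seg_target)            # L_seg
    l_clf1 = fundus_criterion(fundus_logits, fundus_target)  # L_clf-1
    l_clf2 = eye_criterion(eye_logits, eye_target)           # L_clf-2
    return alpha * l_seg + beta * l_clf1 + gamma * l_clf2
```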
For classification prediction, the AUC of each category (the area under the sensitivity-specificity/ROC curve) is evaluated; the larger the value, the better;
for segmentation prediction, the IoU of each category (the ratio of the intersection to the union of the model-predicted lesion region and the ground-truth lesion region) is evaluated; the larger the value, the better;
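The per-class IoU defined here can be computed in a few lines (Python; an illustrative sketch over per-pixel class maps):

```python
import numpy as np

def per_class_iou(pred: np.ndarray, target: np.ndarray, n_classes: int = 8):
    """IoU per segmentation category: |pred ∩ target| / |pred ∪ target|,
    computed over per-pixel class maps; NaN where a class is absent from
    both prediction and ground truth."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        ious.append(float(inter) / float(union) if union else float("nan"))
    return ious
```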
the model attains its minimum total loss at the 97th training epoch, and the corresponding index of each segmentation category is also close to optimal. Therefore, the modules obtained from the 97th training epoch constitute the fundus image analysis system and are deployed into the corresponding fundus image analysis apparatus.
TABLE 1
It should be noted that although the steps are described above in a specific order, this does not mean they must be performed in that order; in fact, some steps may be performed concurrently or in a different order, as long as the required functions are achieved.
The present invention may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present invention.
The computer readable storage medium may be a tangible device that retains and stores instructions for use by an instruction execution device. The computer readable storage medium may include, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, and mechanical encoding devices such as punch cards or raised structures in grooves having instructions stored thereon, as well as any suitable combination of the foregoing.
The foregoing description of embodiments of the invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (18)

1. A fundus image analysis system is characterized by comprising a feature extraction module, a fundus prediction module and a segmentation prediction module, wherein,
the feature extraction module samples a fundus image to be analyzed to extract a fundus feature map;
the fundus prediction module analyzes fundus categories corresponding to fundus images according to the fundus feature map, wherein the fundus categories comprise normal fundus and various myopia-associated fundus;
the segmentation prediction module samples the fundus feature map to analyze a segmentation prediction map corresponding to the fundus image, wherein the segmentation prediction map indicates the category of each pixel in the fundus image, and the pixel categories comprise a background pixel category, an optic disc pixel category, a plurality of arc-shaped spot pixel categories and a plurality of atrophy spot pixel categories;
the system is trained in the following manner:
acquiring training data, which comprises a plurality of fundus pictures, fundus category labels and pixel category labels;
and training the system with the training data, wherein a fundus classification sub-loss is calculated according to the output of the fundus prediction module and the fundus category label, a fundus segmentation sub-loss is calculated according to the output of the segmentation prediction module and the pixel category label, a total loss is calculated from the fundus classification sub-loss and the fundus segmentation sub-loss, and gradient calculation and parameter updating are performed on the feature extraction module, the fundus prediction module and the segmentation prediction module based on the total loss.
2. The system of claim 1, wherein the feature extraction module downsamples the fundus image a plurality of times to obtain the fundus feature map; the segmentation prediction module upsamples the fundus feature map a plurality of times into a multi-channel segmentation map, and analyzes the multi-channel segmentation map to obtain the segmentation prediction map.
3. The system of claim 1 or 2, wherein the plurality of myopia-associated fundus categories are a combination of categories among arc-shaped spot fundus, diffuse atrophy fundus, macular-region atrophy fundus and leopard fundus; the plurality of arc-shaped spot pixel categories are a combination of categories among pigment arc-shaped spots, choroidal arc-shaped spots, mixed arc-shaped spots and scleral arc-shaped spots; and the plurality of atrophy spot pixel categories comprise diffuse atrophy and patch atrophy.
4. A fundus image analysis system is characterized by comprising a feature extraction module, a fundus prediction module, a segmentation prediction module and an eye classification module, wherein,
the feature extraction module downsamples a fundus image to be analyzed a plurality of times to extract a fundus feature map;
the fundus prediction module analyzes fundus categories corresponding to fundus images according to the fundus feature map, wherein the fundus categories comprise normal fundus and various myopia-associated fundus;
the segmentation prediction module upsamples the fundus feature map a plurality of times to analyze a segmentation prediction map corresponding to the fundus image, wherein the segmentation prediction map indicates the category of each pixel in the fundus image, and the pixel categories comprise a background pixel category, an optic disc pixel category, a plurality of arc-shaped spot pixel categories and a plurality of atrophy spot pixel categories;
the eye classification module determines the eye class corresponding to the fundus image according to the fundus characteristic diagram, wherein the eye class is left eye or right eye;
the system is trained in the following manner:
acquiring training data, which comprises a plurality of fundus pictures, fundus category labels, eye category labels and pixel category labels;
and training the system with the training data, wherein a fundus classification sub-loss is calculated according to the output of the fundus prediction module and the fundus category label, an eye classification sub-loss is calculated according to the output of the eye classification module and the eye category label, a fundus segmentation sub-loss is calculated according to the output of the segmentation prediction module and the pixel category label, a total loss is calculated from the fundus classification sub-loss, the eye classification sub-loss and the fundus segmentation sub-loss, and gradient calculation and parameter updating are performed on the feature extraction module, the fundus prediction module, the segmentation prediction module and the eye classification module based on the total loss.
5. The system of claim 4, further comprising a quantization analysis module, wherein,
the quantitative analysis module performs quantitative analysis according to the fundus category and the segmentation prediction graph corresponding to the fundus image or performs quantitative analysis according to the fundus category, the eye category and the segmentation prediction graph corresponding to the fundus image to obtain various quantitative indexes.
6. The system of claim 5, wherein the total loss is calculated as follows:
L_all = α*L_seg + β*L_clf-1 + γ*L_clf-2
wherein L_all denotes the total loss, L_seg the fundus segmentation sub-loss, L_clf-1 the fundus classification sub-loss and L_clf-2 the eye classification sub-loss, and α, β and γ denote the corresponding weights of the fundus segmentation sub-loss, the fundus classification sub-loss and the eye classification sub-loss, respectively.
7. The system of claim 6, wherein the plurality of quantitative indicators includes a number indicator and an area indicator of a lesion, the quantitative analysis module to:
when the fundus category corresponding to the fundus image is any myopia-associated fundus, grading the degree of fundus lesions corresponding to the fundus image according to at least one quantitative index of the quantity index and the area index of the lesions, so as to obtain grading indexes.
8. The system of claim 7, wherein when the fundus category corresponding to the fundus image is any myopia-associated fundus, grading the degree of fundus lesions corresponding to the fundus image according to at least one quantitative index of a number index and an area index of the lesions, to obtain a grading index comprises:
determining a lesion value of the fundus lesion according to at least one quantitative index of a quantity index and an area index of the lesion, and determining a grade of the fundus lesion according to a grading threshold interval in which the lesion value is located, wherein a grading threshold for constructing the grading threshold interval is obtained in the following manner:
randomly sampling part of samples from the collected sample sets containing various age groups, various regions and various myopia degrees;
determining a sampling interval according to the hierarchical granularity and the number of all samples sampled;
and arranging the lesion values corresponding to all the sampled samples in order of magnitude, and sampling at intervals according to the sampling intervals to determine a plurality of grading thresholds for grading.
9. The system of claim 8, wherein the area indicator comprises: the minimum area of the focus, the maximum area of the focus, the total area of the focus, and the ratio of the total area of the focus to the total area of the optic disc.
10. The system of claim 9, wherein when the fundus category corresponding to the fundus image is an arc-shaped plaque fundus, the degree of the arc-shaped plaque lesion corresponding to the fundus image is classified according to a ratio of a total focal area of the arc-shaped plaque to a total optic disc area.
11. The system of claim 10, wherein the total lesion area of the arc-shaped spots is a weighted area, wherein the temporal and nasal sides of the optic disc in the segmentation prediction map are determined based on the eye category, the segmentation prediction map is divided into a plurality of sub-regions based on the temporal and nasal sides, and the weighted area is obtained by weighted summation of the areas of the arc-shaped spots in the segmentation prediction map based on the region weights of the sub-regions and the category weights of the plurality of arc-shaped spot pixel categories.
12. The system of claim 11, wherein the region weight of the sub-region relatively closer to the nasal side is greater than the region weight of the sub-region relatively farther from the nasal side.
13. The system of claim 11, wherein the plurality of arc-shaped spot pixel categories comprise: pigment arc-shaped spots, choroidal arc-shaped spots, mixed arc-shaped spots and scleral arc-shaped spots, and the category weights corresponding to the pigment, choroidal, mixed and scleral arc-shaped spots increase in that order.
14. The system of any one of claims 4 to 13, wherein the plurality of myopia-associated fundus categories are a combination of categories among arc-shaped spot fundus, diffuse atrophy fundus, patch atrophy fundus, macular-region atrophy fundus and leopard fundus; the plurality of arc-shaped spot pixel categories are a combination of categories among pigment arc-shaped spots, choroidal arc-shaped spots, mixed arc-shaped spots and scleral arc-shaped spots; and the plurality of atrophy spot pixel categories comprise diffuse atrophy and patch atrophy.
15. A fundus image analysis method based on the system of any of claims 1 to 14, the method comprising:
acquiring a fundus image to be analyzed;
sampling the fundus image by a feature extraction module to extract a fundus feature map;
analyzing fundus categories corresponding to fundus images by a fundus prediction module according to the fundus feature map;
sampling, by a segmentation prediction module, the fundus feature map to analyze a segmentation prediction map corresponding to a fundus image that indicates a class of each pixel in the fundus image;
outputting the fundus category and the segmentation prediction map obtained through analysis.
16. The fundus image analysis method of claim 15, the system further comprising an eye classification module and a quantitative analysis module, the method further comprising:
analyzing, by the eye classification module according to the fundus feature map, the eye category corresponding to the fundus image, wherein the eye category is left eye or right eye;
the quantitative analysis module performs quantitative analysis according to the fundus category and the segmentation prediction graph corresponding to the fundus image or performs quantitative analysis according to the fundus category, the eye category and the segmentation prediction graph corresponding to the fundus image;
outputting various quantization indexes.
17. A computer readable storage medium having embodied thereon a computer program executable by a processor to perform the steps of the method of claim 15 or 16.
18. An electronic device, comprising:
one or more processors; and
a memory, wherein the memory is for storing executable instructions;
the one or more processors are configured to execute the executable instructions to implement the method of claim 15 or 16.


