CN111583261A - Fundus super-wide-angle image analysis method and terminal - Google Patents


Publication number
CN111583261A
CN111583261A (application CN202010565477.XA; granted publication CN111583261B)
Authority
CN
China
Prior art keywords
fundus
image
wide
super
mark
Prior art date
Legal status
Granted
Application number
CN202010565477.XA
Other languages
Chinese (zh)
Other versions
CN111583261B (en)
Inventor
林晨 (Lin Chen)
喻碧莺 (Yu Biying)
Current Assignee
Lin Chen
Wisdom Medical Shenzhen Co ltd
Original Assignee
Ke Junlong
Priority date
Filing date
Publication date
Application filed by Ke Junlong
Priority application: CN202010565477.XA
Publication of CN111583261A
Application granted
Publication of granted patent CN111583261B
Legal status: Active


Classifications

    • G06T7/0012 Biomedical image inspection (under G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • A61B3/12 Objective instruments for examining the eye fundus, e.g. ophthalmoscopes
    • A61B3/14 Arrangements specially adapted for eye photography
    • G06N3/045 Combinations of networks (neural network architectures)
    • G06N3/08 Learning methods (neural networks)
    • G06T7/11 Region-based segmentation
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G06T2207/30096 Tumor; Lesion


Abstract

The invention provides a fundus super-wide-angle image analysis method and terminal. The fundus super-wide-angle image is cut according to different fundus tissues to obtain an area image corresponding to each fundus tissue; lesion analysis is performed on the area image corresponding to each fundus tissue to obtain the area analysis result of each fundus tissue; and the area analysis results of all fundus tissues are integrated to obtain and output a final analysis result. By splitting the fundus super-wide-angle image before analysis, the method reduces the amount analyzed at each step and allows several cut images to be analyzed in parallel, improving analysis speed. Finally, the analysis results of the cut area images are integrated, so that no whole-image information is lost after the cut images are analyzed, an accurate analysis result is obtained, and a better reference is provided for the doctor.

Description

Fundus super-wide-angle image analysis method and terminal
Technical Field
The invention relates to the field of image analysis, in particular to a fundus super-wide-angle image analysis method and a fundus super-wide-angle image analysis terminal.
Background
In the field of fundus image analysis, the prior art generally feeds the whole fundus image into an AI model, which outputs the disease probability of each ophthalmic disease for that image. Such whole-image analysis lacks specificity and wastes computing resources, and for poor-quality images with excessive interference factors caused by non-standard acquisition it easily causes program errors, which is particularly evident for fundus super-wide-angle images. In addition, the black-box effect of the existing method is obvious: only the disease probabilities of certain specified diseases are given, the decision process behind those probabilities cannot be explained by corresponding inferences, signs and phenomena are not described, and the report cannot be presented according to doctors' clinical reading habits, so the method is of limited use in actual clinical auxiliary diagnosis.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a fundus super-wide-angle image analysis method that improves the speed of image analysis.
In order to solve the technical problems, the invention adopts a technical scheme that:
a fundus ultra-wide angle image analysis method comprises the following steps:
s1, cutting the fundus super-wide-angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
s2, respectively carrying out lesion analysis on the area image corresponding to each fundus tissue to obtain the area analysis result of each fundus tissue;
and S3, integrating the area analysis result of each fundus tissue, and obtaining and outputting a final analysis result.
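Steps S1 to S3 can be sketched as the following pipeline. This is a hedged illustration only: all function names (`crop_fn`, `analyze_fn`, `integrate_fn`) are hypothetical placeholders for the patent's cutting, lesion-analysis, and integration stages, and the thread pool is one possible way to realize the simultaneous analysis of several cut images that the text describes.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_fundus_image(image, crop_fn, analyze_fn, integrate_fn):
    """Sketch of the three-step pipeline: cut per tissue, analyze each
    region in parallel, then integrate the regional results.
    crop_fn/analyze_fn/integrate_fn are hypothetical callables."""
    # S1: cut the super-wide-angle image into per-tissue area images
    regions = crop_fn(image)  # {tissue_name: area_image}
    # S2: analyze every area image concurrently (smaller inputs, parallel)
    with ThreadPoolExecutor() as pool:
        results = dict(zip(regions, pool.map(analyze_fn, regions.values())))
    # S3: integrate the regional results into one final analysis result
    return integrate_fn(results)
```

With dummy stand-ins for the three stages, the pipeline simply fans out over the cut regions and folds the per-region results back together.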
In order to solve the technical problem, the invention adopts another technical scheme as follows:
a fundus super-wide-angle image analysis terminal comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the computer program to realize the following steps:
s1, cutting the fundus super-wide-angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
s2, respectively carrying out lesion analysis on the area image corresponding to each fundus tissue to obtain the area analysis result of each fundus tissue;
and S3, integrating the area analysis result of each fundus tissue, and obtaining and outputting a final analysis result.
The invention has the beneficial effects that: the fundus super-wide-angle image is cut according to different fundus tissues, each cut image is analyzed separately to obtain the condition of the specific fundus tissue, i.e. the area analysis result, and the conditions of the different fundus tissues are then integrated to obtain the final analysis result. Cutting the image before analysis reduces the size of the input image compared with analyzing the whole image, allows several cut images to be analyzed simultaneously, and so improves the analysis speed. Meanwhile, the specific conditions of different fundus tissues can be acquired and integrated as required, producing both the condition of each fundus tissue and an analysis result combining those conditions, which provides a reference of higher clinical practicability.
Drawings
Fig. 1 is a flowchart illustrating steps of a method for analyzing a super-wide-angle fundus image according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a fundus super-wide-angle image analysis terminal according to an embodiment of the present invention;
description of reference numerals:
1. an eye fundus super wide-angle image analysis terminal; 2. a processor; 3. a memory.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
Referring to fig. 1, a method for analyzing a super-wide-angle image of a fundus includes the steps of:
s1, cutting the fundus super-wide-angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
s2, respectively carrying out lesion analysis on the area image corresponding to each fundus tissue to obtain the area analysis result of each fundus tissue;
and S3, integrating the area analysis result of each fundus tissue, and obtaining and outputting a final analysis result.
From the above description, the beneficial effects of the present invention are: the fundus super-wide-angle image is cut according to different fundus tissues, each cut image is analyzed separately to obtain the condition of the specific fundus tissue, i.e. the area analysis result, and the conditions of the different fundus tissues are then integrated to obtain the final analysis result. Cutting the image before analysis reduces the size of the input image compared with analyzing the whole image, allows several cut images to be analyzed simultaneously, and so improves the analysis speed. Meanwhile, the specific conditions of different fundus tissues can be acquired and integrated as required, producing both the condition of each fundus tissue and an analysis result combining those conditions, which provides a reference of higher clinical practicability.
Further, before S1, the method further includes:
training an AI model to automatically mark different fundus tissues;
the S1 specifically includes:
s11, inputting the fundus super-wide-angle image into the AI model to obtain fundus super-wide-angle images with different fundus tissue marks;
s12, judging whether the position difference between the marks exceeds a threshold value, if so, returning to S11; otherwise, cutting the fundus super-wide-angle image according to the mark.
According to the above description, the AI model is trained to automatically mark the different fundus tissues, and the marking result is checked against a preset threshold; if the corresponding condition is not met, marking is performed again. This ensures the marking accuracy of the AI model and improves marking efficiency, and, since cutting is performed according to the marks, the completeness of each fundus tissue in its cut image is guaranteed.
Further, the fundus tissue includes the optic disc, the macula lutea, and the mid-peripheral and peripheral portions of the retina;
the obtaining of the area image corresponding to each fundus tissue in S1 includes:
obtaining a first area image including the optic disc, a second area image including the macula lutea, and a third area image including the mid-peripheral and peripheral portions of the retina;
classifying the first area image, the second area image and the third area image according to lesions by using an image classification model to obtain a lesion classification result of each area image;
performing semantic segmentation on the first area image, the second area image and the third area image respectively by using an image semantic segmentation model to obtain the shape of the fundus tissue in each area image;
and obtaining the region analysis result of each region image according to the lesion classification result and the shape of the fundus tissue.
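The semantic-segmentation step above yields the shape of the fundus tissue in each area image. As a minimal stand-in for "obtaining the shape", the sketch below extracts the pixel area and bounding box of one tissue label from a segmentation mask; the function name and label convention are assumptions, not part of the patent.

```python
import numpy as np

def tissue_extent(mask, label):
    """Given a semantic-segmentation mask (H x W integer array) and a
    tissue label, return the tissue's pixel area and bounding box
    (x_min, y_min, x_max, y_max) -- a minimal proxy for its shape."""
    ys, xs = np.nonzero(mask == label)
    if len(ys) == 0:
        return 0, None  # tissue not present in this area image
    area = int(len(ys))
    bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return area, bbox
```

Quantities like the optic-cup and optic-disc extents used later for the cup-to-disc ratio could be read off from such per-label extents.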
According to the above description, the completeness of the optic disc, the macula lutea, and the mid-peripheral and peripheral portions of the retina in the cut images is guaranteed, which facilitates subsequent analysis. The images are classified by the image classification model and segmented by the semantic segmentation model, so the lesion probability and the shape of the fundus tissue are obtained at the same time, which facilitates the subsequent generation of a highly readable report and provides a more detailed reference for the doctor's diagnosis.
Further, the S3 specifically includes:
and inputting the lesion classification result and the shape of the fundus tissue of each region image into a preset logic model to obtain and output a final analysis result.
From the above description, the lesion classification result and the fundus tissue shape of each area image are input into the preset logic model, so the analysis results of the several cut images can be analyzed in combination. The local analysis results are integrated into an overall analysis result while local accuracy is preserved, and the detailed decision process is visible in the logic model; such a transparent analysis process provides doctors with data of greater reference value.
Further, S12 specifically comprises:
judging whether the position of the macula lutea mark is on the left or right side of the level of the optic disc mark, and if not, returning to S11;
judging whether the distance between the macula lutea mark and the optic disc mark exceeds ten times the diameter of the optic disc mark, and if so, returning to S11;
judging whether the mark of the mid-peripheral portion of the retina lies outside the mark of the equatorial portion, and if not, returning to S11;
and if the macula lutea mark is on the left or right side of the level of the optic disc mark, the distance between the macula lutea mark and the optic disc mark is less than ten times the diameter of the optic disc mark, and the mid-peripheral retina mark is outside the equatorial mark, cutting the fundus super-wide-angle image according to the marks.
According to the above description, specific criteria are set to verify the AI model's marking of the fundus super-wide-angle image, and marking is repeated if the result does not meet the threshold, so obviously erroneous marking results are eliminated and the accuracy of the marking results is improved.
Referring to fig. 2, a fundus super-wide-angle image analysis terminal includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the following steps when executing the computer program:
s1, cutting the fundus super-wide-angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
s2, respectively carrying out lesion analysis on the area image corresponding to each fundus tissue to obtain the area analysis result of each fundus tissue;
and S3, integrating the area analysis result of each fundus tissue, and obtaining and outputting a final analysis result.
The invention has the beneficial effects that: the fundus super-wide-angle image is cut according to different fundus tissues, each cut image is analyzed separately to obtain the condition of the specific fundus tissue, i.e. the area analysis result, and the conditions of the different fundus tissues are then integrated to obtain the final analysis result. Cutting the image before analysis reduces the size of the input image compared with analyzing the whole image, allows several cut images to be analyzed simultaneously, and so improves the analysis speed. Meanwhile, the specific conditions of different fundus tissues can be acquired and integrated as required, producing both the condition of each fundus tissue and an analysis result combining those conditions, which provides a reference of higher clinical practicability.
Further, before S1, the method further includes:
training an AI model to automatically mark different fundus tissues;
the S1 specifically includes:
s11, inputting the fundus super-wide-angle image into the AI model to obtain fundus super-wide-angle images with different fundus tissue marks;
s12, judging whether the position difference between the marks exceeds a threshold value, if so, returning to S11; otherwise, cutting the fundus super-wide-angle image according to the mark.
According to the above description, the AI model is trained to automatically mark the different fundus tissues, and the marking result is checked against a preset threshold; if the corresponding condition is not met, marking is performed again. This ensures the marking accuracy of the AI model and improves marking efficiency, and, since cutting is performed according to the marks, the completeness of each fundus tissue in its cut image is guaranteed.
Further, the fundus tissue includes the optic disc, the macula lutea, and the mid-peripheral and peripheral portions of the retina;
the obtaining of the area image corresponding to each fundus tissue in S1 includes:
obtaining a first area image including the optic disc, a second area image including the macula lutea, and a third area image including the mid-peripheral and peripheral portions of the retina;
classifying the first area image, the second area image and the third area image according to lesions by using an image classification model to obtain a lesion classification result of each area image;
performing semantic segmentation on the first area image, the second area image and the third area image respectively by using an image semantic segmentation model to obtain the shape of the fundus tissue in each area image;
and obtaining the region analysis result of each region image according to the lesion classification result and the shape of the fundus tissue.
According to the above description, the completeness of the optic disc, the macula lutea, and the mid-peripheral and peripheral portions of the retina in the cut images is guaranteed, which facilitates subsequent analysis. The images are classified by the image classification model and segmented by the semantic segmentation model, so the lesion probability and the shape of the fundus tissue are obtained at the same time, which facilitates the subsequent generation of a highly readable report and provides a more detailed reference for the doctor's diagnosis.
Further, the S3 specifically includes:
and inputting the lesion classification result and the shape of the fundus tissue of each region image into a preset logic model to obtain and output a final analysis result.
From the above description, the lesion classification result and the fundus tissue shape of each area image are input into the preset logic model, so the analysis results of the several cut images can be analyzed in combination. The local analysis results are integrated into an overall analysis result while local accuracy is preserved, and the detailed decision process is visible in the logic model; such a transparent analysis process provides doctors with data of greater reference value.
Further, S12 specifically comprises:
judging whether the position of the macula lutea mark is on the left or right side of the level of the optic disc mark, and if not, returning to S11;
judging whether the distance between the macula lutea mark and the optic disc mark exceeds ten times the diameter of the optic disc mark, and if so, returning to S11;
judging whether the mark of the mid-peripheral portion of the retina lies outside the mark of the equatorial portion, and if not, returning to S11;
and if the macula lutea mark is on the left or right side of the level of the optic disc mark, the distance between the macula lutea mark and the optic disc mark is less than ten times the diameter of the optic disc mark, and the mid-peripheral retina mark is outside the equatorial mark, cutting the fundus super-wide-angle image according to the marks.
According to the above description, specific criteria are set to verify the AI model's marking of the fundus super-wide-angle image, and marking is repeated if the result does not meet the threshold, so obviously erroneous marking results are eliminated and the accuracy of the marking results is improved.
Referring to fig. 1, a first embodiment of the present invention is:
A fundus super-wide-angle image analysis method specifically comprises the following steps:
training an AI model to automatically mark different fundus tissues;
In an optional implementation, boundary tracing is performed on regions such as eyelash shadows, eyelids, and instrument borders in a large number of fundus super-wide-angle images using annotation software (such as LabelMe, or independently developed annotation software); the generated mask images are input into an AI model (which may be an image semantic segmentation model such as an FCN (Fully Convolutional Network), U-Net, or U-Net++) for training. The trained model can then automatically mark regions such as eyelash shadows, eyelids, and instrument borders, and the marked instrument borders can be cut away directly;
In an optional implementation, the optic disc, the macula lutea, and the vortex veins are marked with the annotation software, the images generated by marking are input into an AI model (an image segmentation model such as an FCN or U-Net) for training, the marking ranges of other regions are set with the positions of the optic disc, macula lutea, or vortex veins as references, and the AI model marks the different fundus tissues according to the training result and the set positions of the other regions relative to the optic disc, macula lutea, or vortex veins;
For example, according to the marked positions of the macula lutea and the optic disc, the distance between the macula lutea and the optic disc is determined, and a circle is drawn with the macula lutea as the center and two to four times that distance as the radius; this circle is the posterior pole. The equatorial portion is determined from the positions of the vortex veins, and the retinal periphery lies outside the equatorial portion;
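The posterior-pole construction above is simple geometry and can be sketched as follows. This is an illustration under stated assumptions: mark positions are (x, y) pixel coordinates, the multiplier `k` is taken from the text's range of two to four, and the function names are hypothetical.

```python
import math

def posterior_pole_circle(macula_xy, disc_xy, k=3.0):
    """Posterior pole as described in the text: a circle centred on the
    macula whose radius is k times the macula-disc distance (k in 2-4)."""
    d = math.dist(macula_xy, disc_xy)  # macula-to-disc distance
    return macula_xy, k * d            # (centre, radius)

def in_posterior_pole(point_xy, macula_xy, disc_xy, k=3.0):
    """True if a fundus point falls inside the posterior-pole circle."""
    centre, radius = posterior_pole_circle(macula_xy, disc_xy, k)
    return math.dist(point_xy, centre) <= radius
```

For instance, with the macula at (0, 0) and the disc at (4, 3), the macula-disc distance is 5, so with k = 3 the posterior pole is the circle of radius 15 about the macula.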
Specifically, the marking range of the macula lutea is determined from the fovea;
s1, cutting the fundus super-wide-angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
wherein the fundus tissue includes the optic disc, macula lutea, and the mid-peripheral and peripheral portions of the retina;
the S1 specifically includes:
s11, inputting the fundus super-wide-angle image into the AI model to obtain fundus super-wide-angle images with different fundus tissue marks;
s12, judging whether the position difference between the marks exceeds a threshold value, if so, returning to S11; otherwise, cutting the fundus super-wide-angle image according to the mark;
Specifically, S12 comprises the following steps:
judging whether the position of the macula lutea mark is on the left or right side of the level of the optic disc mark, and if not, returning to S11;
judging whether the distance between the macula lutea mark and the optic disc mark exceeds ten times the diameter of the optic disc mark, and if so, returning to S11;
judging whether the mark of the mid-peripheral portion of the retina lies outside the mark of the equatorial portion, and if not, returning to S11;
and if the macula lutea mark is on the left or right side of the level of the optic disc mark, the distance between the macula lutea mark and the optic disc mark is less than ten times the diameter of the optic disc mark, and the mid-peripheral retina mark is outside the equatorial mark, cutting the fundus super-wide-angle image according to the marks;
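The three S12 sanity checks can be sketched as a single validation function. This is a hedged sketch: mark positions are assumed to be (x, y) coordinates, the "left or right of the level of the disc" test is interpreted as the horizontal offset dominating the vertical offset, and all names are hypothetical.

```python
import math

def marks_valid(macula, disc, disc_diameter, mid_periphery_inside_equator):
    """Return True if the AI marks pass the S12 checks and cutting may
    proceed; False means marking should be re-run (return to S11)."""
    # Check 1: the macula mark must lie to the left or right of the
    # optic disc level, i.e. roughly beside it rather than above/below
    # (interpreted here as |dx| > |dy| -- an assumption).
    dx = macula[0] - disc[0]
    dy = macula[1] - disc[1]
    if abs(dx) <= abs(dy):
        return False
    # Check 2: macula-disc distance must not exceed 10 disc diameters.
    if math.dist(macula, disc) > 10 * disc_diameter:
        return False
    # Check 3: the mid-peripheral retina mark must lie outside the
    # equatorial mark.
    if mid_periphery_inside_equator:
        return False
    return True
```

A failing check sends control back to S11 for re-marking, exactly as in the step list above.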
s2, respectively carrying out lesion analysis on the area image corresponding to each fundus tissue to obtain the area analysis result of each fundus tissue;
and S3, integrating the area analysis result of each fundus tissue, and obtaining and outputting a final analysis result.
The second embodiment of the invention is as follows:
a method for analyzing a super-wide-angle fundus image, which is different from the first embodiment in that:
the obtaining of the area image corresponding to each fundus tissue in S1 includes:
obtaining a first area image including the optic disc, a second area image including the macula lutea, and a third area image including the mid-peripheral and peripheral portions of the retina;
classifying the first area image, the second area image, and the third area image according to lesions by using an image classification model (such as a CNN (Convolutional Neural Network)) to obtain a lesion classification result for each area image;
in an optional implementation mode, the lesion condition and the lesion classification result of each regional image are obtained by using an image classification model;
Specifically, the image classification model is obtained by training: the lesions of different fundus tissues in a large number of fundus super-wide-angle images are marked, the lesion conditions including normal, exudation, bleeding, atrophy, and degeneration, together with the lesion types corresponding to those conditions; the marked results are input into a CNN model for training, and the trained CNN model can then classify lesions or mark lesion conditions according to the input image;
The image classification model may also be EfficientNet, YOLO, or the like;
For example, a glaucoma fundus image may be marked as: lesion condition, optic disc atrophy; lesion type, glaucoma;
performing semantic segmentation on the first area image, the second area image and the third area image respectively by using an image semantic segmentation model to obtain the shape of the fundus tissue in each area image;
obtaining a region analysis result of each region image according to the lesion classification result and the shape of the fundus tissue;
Specifically, for the first area image, the lesion condition of the optic disc, i.e. normal, exudation, bleeding, atrophy, or degeneration, is determined by the image classification model, and the extents of the optic cup and optic disc are obtained by the semantic segmentation model; for the second area image, the lesion condition of the macula lutea is determined by the image classification model and the extent of the macula lutea is obtained by the semantic segmentation model; for the third area image, whether the retinal periphery has degeneration is determined by the image classification model;
the S3 specifically includes:
inputting the lesion classification result and the fundus tissue shape of each region image into a preset logic model to obtain and output a final analysis result;
Specifically, the lesion classification results of several fundus tissues and the shapes of the fundus tissues are analyzed in combination, or the whole image is additionally classified; the combined analysis results and the whole-image lesion classification result are input into a logic model (such as a logistic regression model or a mixed linear model) for judgment, a decision-tree result is formed, and a conclusion is finally generated;
Specifically, for the first area image, the cup-to-disc ratio is calculated from the extents of the optic cup and optic disc. If the cup-to-disc ratio exceeds a threshold and the classification result of the image classification model is optic disc atrophy, the first area image analysis result is a high probability of glaucoma; if the cup-to-disc ratio does not exceed the threshold but the classification result is optic disc atrophy, the analysis result is a possibility of glaucoma, with a reminder to manually verify the cup-disc condition; if the cup-to-disc ratio exceeds the threshold but the classification result is that the optic disc is normal, the analysis result is that the optic disc is normal;
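The first-region decision logic above can be sketched as a small function. This is an illustration only: the cup and disc areas stand in for the segmented extents, the 0.6 cup-to-disc threshold is an assumed placeholder (the patent does not give a value), and the returned strings are hypothetical report phrasings.

```python
def disc_region_conclusion(cup_area, disc_area, cls_result, cdr_threshold=0.6):
    """Combine the cup-to-disc ratio (from segmentation) with the
    classifier's result for the optic-disc area image, following the
    three cases described in the text."""
    cdr = cup_area / disc_area
    atrophy = (cls_result == "optic disc atrophy")
    if cdr > cdr_threshold and atrophy:
        return "high probability of glaucoma"
    if atrophy:
        # ratio within threshold but classifier reports atrophy
        return "possible glaucoma; recommend manual cup-disc review"
    # classifier reports a normal disc (whether or not the cup is large)
    return "optic disc normal"
```

The point of the combination is that neither signal alone decides: a large cup with a normal classification still yields a normal-disc conclusion, per the text.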
In an alternative embodiment, the fundus super-wide-angle image is segmented to obtain a fourth area image containing the posterior pole, and the image classification model yields the lesion status and lesion type of the posterior pole: the lesion status is hemorrhage and exudation, and the lesion type is diabetic retinopathy with probability 0.9; at the same time, the lesion status of the macula is obtained as hemorrhage and exudation, with lesion type macular degeneration and probability 0.94. Because these two diseases rarely occur together, the preset logic model assigns, for diabetic retinopathy, a weight of 0.3 to the macular lesion and 0.7 to the posterior-pole lesion; then 0.94 × 0.3 + 0.9 × 0.7 = 0.912, and the final judgment at this point is diabetic retinopathy. If instead the lesion type of the posterior pole is diabetic retinopathy with probability 0.1, then 0.94 × 0.3 + 0.1 × 0.7 = 0.352, and the final judgment is macular degeneration;
Specifically, a corresponding threshold may be set to determine the final judgment; it may be set between 0.5 and 0.8 according to the model's output, and if the score exceeds the threshold, the corresponding disease is taken as the final judgment. Continuing the calculation above with the threshold set to 0.8: since 0.912 > 0.8, the final judgment in the first case is diabetic retinopathy; since 0.352 < 0.8, the final judgment in the second case is macular degeneration;
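The weighted fusion from this worked example can be reproduced directly. The weights (0.3/0.7) and the 0.8 decision threshold come from the text; the function interface itself is an assumption.

```python
def fuse(p_macula, p_posterior, w_macula=0.3, w_posterior=0.7,
         decision_threshold=0.8):
    """Weighted fusion of region lesion probabilities, following the
    worked example above. Returns (score, verdict)."""
    # Weighted combination of the two regions' disease probabilities.
    score = p_macula * w_macula + p_posterior * w_posterior
    verdict = ("diabetic retinopathy" if score > decision_threshold
               else "macular degeneration")
    return round(score, 3), verdict

print(fuse(0.94, 0.9))   # first case from the text
print(fuse(0.94, 0.1))   # second case from the text
```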
when the final judgment result is output, the whole judgment process is output;
Specifically, when the final analysis result is output, the lesion status, lesion classification result, and joint analysis result of each region are output. For example, if glaucoma and diabetic retinopathy are determined to be present together with peripheral retinal degeneration, the final analysis result is: optic disc atrophy, enlarged optic cup, degeneration zones visible in the retinal periphery, and hemorrhage and exudation of the macula and posterior pole; glaucoma and diabetic retinopathy are suspected.
Referring to fig. 2, a third embodiment of the present invention is:
A fundus super-wide-angle image analysis terminal 1 comprises a processor 2, a memory 3, and a computer program stored on the memory 3 and executable on the processor 2; the processor 2 implements the steps of the first or second embodiment when executing the computer program.
In summary, the present invention provides a fundus super-wide-angle image analysis method and terminal. By analyzing multiple region images obtained by segmenting the fundus super-wide-angle image according to fundus tissues, the single input size is reduced and the region images can be analyzed simultaneously, greatly improving the speed of image analysis. An AI model is trained to mark the fundus tissues automatically so that the image can be segmented according to the marks; a matching threshold is set for the marks, and if a marking result exceeds the threshold, the marking is judged erroneous and the fundus tissues are re-marked, ensuring marking accuracy and laying a foundation for subsequent analysis. The segmented images undergo lesion classification via the image classification model, and their tissue shapes are obtained via the image semantic segmentation model, so that both the lesion status and the lesion extent are obtained, providing a more reliable reference for subsequent lesion determination.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.

Claims (10)

1. A fundus ultra-wide angle image analysis method is characterized by comprising the following steps:
s1, cutting the fundus super-wide-angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
s2, respectively carrying out lesion analysis on the area image corresponding to each fundus tissue to obtain the area analysis result of each fundus tissue;
and S3, integrating the area analysis result of each fundus tissue, and obtaining and outputting a final analysis result.
2. The method for analyzing the ultra-wide-angle image of the fundus of claim 1, further comprising, before S1:
training an AI model to automatically mark different fundus tissues;
the S1 specifically includes:
s11, inputting the fundus super-wide-angle image into the AI model to obtain fundus super-wide-angle images with different fundus tissue marks;
s12, judging whether the position difference between the marks exceeds a threshold value, if so, returning to S11; otherwise, cutting the fundus super-wide-angle image according to the mark.
3. The method for analyzing the ultra-wide-angle image of the fundus oculi of claim 1, wherein the fundus tissues include the optic disc, the macula lutea, and the middle and peripheral parts of the retina;
the obtaining of the area image corresponding to each fundus tissue in S1 includes:
obtaining a first area image including an optic disc, a second area image including a macula lutea, and a third area image including the middle periphery and peripheral portion of the retina;
classifying the first area image, the second area image and the third area image according to lesions by using an image classification model to obtain a lesion classification result of each area image;
performing semantic segmentation on the first area image, the second area image and the third area image respectively by using an image semantic segmentation model to obtain the shape of the fundus tissue in each area image;
and obtaining the region analysis result of each region image according to the lesion classification result and the shape of the fundus tissue.
4. The method for analyzing the ultra-wide-angle image of the fundus of the eye according to claim 3, wherein the step S3 is specifically as follows:
and inputting the lesion classification result and the shape of the fundus tissue of each region image into a preset logic model to obtain and output a final analysis result.
5. The method for analyzing the ultra-wide-angle image of the fundus according to claim 2 or 3, wherein S12 specifically comprises:
judging whether the macular mark lies to the left or right of the optic disc mark horizontally, and if not, returning to S11;
judging whether the distance between the macular mark and the optic disc mark exceeds ten times the diameter of the optic disc mark, and if so, returning to S11;
judging whether the mark of the retinal mid-periphery lies outside the equator mark, and if not, returning to S11;
and if the macular mark lies to the left or right of the optic disc mark horizontally, the distance between the macular mark and the optic disc mark is less than ten times the diameter of the optic disc mark, and the mark of the retinal mid-periphery lies outside the equator mark, cutting the fundus super-wide-angle image according to the marks.
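A minimal sketch of the mark-plausibility checks described in claim 5, under assumed data structures: marks are taken as (x, y) centers, the optic disc mark carries a diameter, and "left or right of the level" is interpreted as the horizontal offset dominating the vertical one. None of these representations come from the patent itself.

```python
import math

def marks_valid(macula_xy, disc_xy, disc_diameter,
                midperiphery_outside_equator):
    """Return True if the AI model's marks pass the plausibility checks
    (the image may be cut); False means re-marking, i.e. return to S11."""
    dx = macula_xy[0] - disc_xy[0]
    dy = macula_xy[1] - disc_xy[1]
    # Rule 1: macula must lie to the left or right of the disc mark
    # (assumed reading: horizontal offset exceeds vertical offset).
    if abs(dx) <= abs(dy):
        return False
    # Rule 2: macula-to-disc distance must not exceed 10 disc diameters.
    if math.hypot(dx, dy) > 10 * disc_diameter:
        return False
    # Rule 3: the mid-periphery mark must lie outside the equator mark.
    return midperiphery_outside_equator

print(marks_valid((300, 105), (200, 100), 40, True))   # plausible marks
print(marks_valid((200, 600), (200, 100), 40, True))   # macula not lateral
```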
6. A fundus super-wide-angle image analysis terminal, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the following steps when executing the computer program:
s1, cutting the fundus super-wide-angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
s2, respectively carrying out lesion analysis on the area image corresponding to each fundus tissue to obtain the area analysis result of each fundus tissue;
and S3, integrating the area analysis result of each fundus tissue, and obtaining and outputting a final analysis result.
7. The fundus super-wide-angle image analysis terminal according to claim 6, further comprising, before said S1:
training an AI model to automatically mark different fundus tissues;
the S1 specifically includes:
s11, inputting the fundus super-wide-angle image into the AI model to obtain fundus super-wide-angle images with different fundus tissue marks;
s12, judging whether the position difference between the marks exceeds a threshold value, if so, returning to S11; otherwise, cutting the fundus super-wide-angle image according to the mark.
8. The ultra-wide-angle fundus image analysis terminal according to claim 6, wherein the fundus tissues comprise the optic disc, macula lutea, and the middle and peripheral parts of retina;
the obtaining of the area image corresponding to each fundus tissue in S1 includes:
obtaining a first area image including an optic disc, a second area image including a macula lutea, and a third area image including the middle periphery and peripheral portion of the retina;
classifying the first area image, the second area image and the third area image according to lesions by using an image classification model to obtain a lesion classification result of each area image;
performing semantic segmentation on the first area image, the second area image and the third area image respectively by using an image semantic segmentation model to obtain the shape of the fundus tissue in each area image;
and obtaining the region analysis result of each region image according to the lesion classification result and the shape of the fundus tissue.
9. The fundus super-wide-angle image analysis terminal according to claim 8, wherein the S3 specifically is:
and inputting the lesion classification result and the shape of the fundus tissue of each region image into a preset logic model to obtain and output a final analysis result.
10. The fundus super-wide-angle image analysis terminal according to claim 7 or 8, wherein S12 specifically comprises:
judging whether the macular mark lies to the left or right of the optic disc mark horizontally, and if not, returning to S11;
judging whether the distance between the macular mark and the optic disc mark exceeds ten times the diameter of the optic disc mark, and if so, returning to S11;
judging whether the mark of the retinal mid-periphery lies outside the equator mark, and if not, returning to S11;
and if the macular mark lies to the left or right of the optic disc mark horizontally, the distance between the macular mark and the optic disc mark is less than ten times the diameter of the optic disc mark, and the mark of the retinal mid-periphery lies outside the equator mark, cutting the fundus super-wide-angle image according to the marks.
CN202010565477.XA 2020-06-19 2020-06-19 Method and terminal for analyzing ultra-wide angle image of eye bottom Active CN111583261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010565477.XA CN111583261B (en) 2020-06-19 2020-06-19 Method and terminal for analyzing ultra-wide angle image of eye bottom

Publications (2)

Publication Number Publication Date
CN111583261A true CN111583261A (en) 2020-08-25
CN111583261B CN111583261B (en) 2023-08-18

Family

ID=72127560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010565477.XA Active CN111583261B (en) 2020-06-19 2020-06-19 Method and terminal for analyzing ultra-wide angle image of eye bottom

Country Status (1)

Country Link
CN (1) CN111583261B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018116321A2 (en) * 2016-12-21 2018-06-28 Braviithi Technologies Private Limited Retinal fundus image processing method
CN108717696A (en) * 2018-05-16 2018-10-30 上海鹰瞳医疗科技有限公司 Macula lutea image detection method and equipment
CN109472781A (en) * 2018-10-29 2019-03-15 电子科技大学 A kind of diabetic retinopathy detection system based on serial structure segmentation
CN110327013A (en) * 2019-05-21 2019-10-15 北京至真互联网技术有限公司 Eye fundus image detection method, device and equipment and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113768461A (en) * 2021-09-14 2021-12-10 北京鹰瞳科技发展股份有限公司 Fundus image analysis method and system and electronic equipment
CN113768461B (en) * 2021-09-14 2024-03-22 北京鹰瞳科技发展股份有限公司 Fundus image analysis method, fundus image analysis system and electronic equipment
CN116309549A (en) * 2023-05-11 2023-06-23 爱尔眼科医院集团股份有限公司 Fundus region detection method, fundus region detection device, fundus region detection equipment and readable storage medium
CN116309549B (en) * 2023-05-11 2023-10-03 爱尔眼科医院集团股份有限公司 Fundus region detection method, fundus region detection device, fundus region detection equipment and readable storage medium
CN117689893A (en) * 2024-02-04 2024-03-12 智眸医疗(深圳)有限公司 Laser scanning ultra-wide-angle fundus image semantic segmentation method, system and terminal

Also Published As

Publication number Publication date
CN111583261B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
Asiri et al. Deep learning based computer-aided diagnosis systems for diabetic retinopathy: A survey
WO2021068523A1 (en) Method and apparatus for positioning macular center of eye fundus image, electronic device, and storage medium
CN111583261B (en) Method and terminal for analyzing ultra-wide angle image of eye bottom
Gao et al. Automatic feature learning to grade nuclear cataracts based on deep learning
WO2021082691A1 (en) Segmentation method and apparatus for lesion area of eye oct image, and terminal device
Sarathi et al. Blood vessel inpainting based technique for efficient localization and segmentation of optic disc in digital fundus images
Xiong et al. An approach to evaluate blurriness in retinal images with vitreous opacity for cataract diagnosis
CN107330449A (en) A kind of BDR sign detection method and device
CN108615051A (en) Diabetic retina image classification method based on deep learning and system
Reza et al. Diagnosis of diabetic retinopathy: automatic extraction of optic disc and exudates from retinal images using marker-controlled watershed transformation
Zong et al. U-net based method for automatic hard exudates segmentation in fundus images using inception module and residual connection
CN110084803A (en) Eye fundus image method for evaluating quality based on human visual system
Sedai et al. Multi-stage segmentation of the fovea in retinal fundus images using fully convolutional neural networks
CN113066066A (en) Retinal abnormality analysis method and device
US20210383262A1 (en) System and method for evaluating a performance of explainability methods used with artificial neural networks
Zheng et al. Five-category intelligent auxiliary diagnosis model of common fundus diseases based on fundus images
CN111178420A (en) Coronary segment labeling method and system on two-dimensional contrast image
Wang et al. Accurate disease detection quantification of iris based retinal images using random implication image classifier technique
CN109978796A (en) Optical fundus blood vessel Picture Generation Method, device and storage medium
CN106446805A (en) Segmentation method and system for optic cup in eye ground photo
Yadav et al. Automatic Cataract Severity Detection and Grading Using Deep Learning
Kanakaprabha et al. Diabetic Retinopathy Detection Using Deep Learning Models
Soliz et al. Computer-aided methods for quantitative assessment of longitudinal changes in retinal images presenting with maculopathy
CN105205813A (en) Cornea arcus senilis automatic detection method
Qomariah et al. Exudate Segmentation for Diabetic Retinopathy Using Modified FCN-8 and Dice Loss.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210318

Address after: Unit G7, block a, floor 1, building 9, Baoneng Science Park, Qinghu village, Qinghu community, Longhua street, Longhua District, Shenzhen, Guangdong 518000

Applicant after: Lin Chen

Applicant after: Huishili medical (Shenzhen) Co.,Ltd.

Address before: Room 604, block 4, ginkgo garden, 296 Shangdu Road, Cangshan District, Fuzhou City, Fujian Province 350000

Applicant before: Lin Chen

Applicant before: Ke Junlong

TA01 Transfer of patent application right

Effective date of registration: 20220908

Address after: Room 804, Building 3A, Qiaoxiang Mansion, Qiaoxiang Road, Futian District, Shenzhen, Guangdong 518000

Applicant after: Wisdom Medical (Shenzhen) Co.,Ltd.

Applicant after: Lin Chen

Address before: Unit G7, block a, floor 1, building 9, Baoneng Science Park, Qinghu village, Qinghu community, Longhua street, Longhua District, Shenzhen, Guangdong 518000

Applicant before: Lin Chen

Applicant before: Huishili medical (Shenzhen) Co.,Ltd.

GR01 Patent grant