CN111583261B - Method and terminal for analyzing fundus ultra-wide angle images - Google Patents


Info

Publication number
CN111583261B
Authority
CN
China
Prior art keywords
fundus
image
mark
ultra
wide angle
Prior art date
Legal status
Active
Application number
CN202010565477.XA
Other languages
Chinese (zh)
Other versions
CN111583261A (en)
Inventor
林晨
喻碧莺
Current Assignee
Lin Chen
Wisdom Medical Shenzhen Co ltd
Original Assignee
Wisdom Medical Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Wisdom Medical Shenzhen Co ltd filed Critical Wisdom Medical Shenzhen Co ltd
Priority to CN202010565477.XA
Publication of CN111583261A
Application granted
Publication of CN111583261B
Status: Active

Classifications

    • G06T 7/0012 — Biomedical image inspection (under G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • A61B 3/12 — Objective instruments for examining the eyes, for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/14 — Arrangements specially adapted for eye photography
    • G06N 3/045 — Neural network architectures; combinations of networks
    • G06N 3/08 — Neural network learning methods
    • G06T 7/11 — Region-based segmentation (under G06T 7/10 Segmentation; edge detection)
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30041 — Eye; retina; ophthalmic (biomedical image processing)
    • G06T 2207/30096 — Tumor; lesion (biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Ophthalmology & Optometry (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention provides a fundus ultra-wide angle image analysis method and terminal. A fundus ultra-wide angle image is cut according to different fundus tissues to obtain a region image corresponding to each fundus tissue; lesion analysis is carried out separately on the region image corresponding to each fundus tissue to obtain a region analysis result for each fundus tissue; and the region analysis results of all fundus tissues are integrated to obtain and output a final analysis result. Because the fundus ultra-wide angle image is segmented before analysis, the amount analyzed at each step is reduced and the segmented images can be analyzed in parallel, which improves analysis speed. Finally, the analysis results of the segmented region images are integrated, so the overall information is not lost after segment-wise analysis, an accurate analysis result is obtained, and a better reference is provided for doctors.

Description

Method and terminal for analyzing fundus ultra-wide angle images
Technical Field
The invention relates to the field of image analysis, and in particular to a fundus ultra-wide angle image analysis method and terminal.
Background
In the field of fundus image analysis, the prior art generally feeds the whole fundus image into an AI model and outputs the disease probability of each ophthalmic disease for that image. Such whole-image analysis has poor pertinence and wastes computational resources, and it is prone to program errors for images of poor quality or with excessive interference factors caused by irregular acquisition; this is especially evident in the analysis of fundus ultra-wide angle images. Moreover, the black-box effect of the existing methods is obvious: only the disease probabilities of certain specified diseases are given, the decision process behind those probabilities cannot be obtained, no explanation is provided for why the corresponding inference was reached, descriptions of signs and phenomena are lacking, and reports cannot be presented according to a doctor's clinical film-reading habits. These methods therefore have considerable limitations as aids to diagnosis in actual clinical practice.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a fundus ultra-wide angle image analysis method that improves the speed of image analysis.
In order to solve the technical problems, the invention adopts a technical scheme that:
a fundus ultra-wide angle image analysis method comprises the following steps:
S1, cutting the fundus ultra-wide angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
S2, respectively carrying out lesion analysis on the region image corresponding to each fundus tissue to obtain a region analysis result of each fundus tissue;
S3, integrating the region analysis results of the fundus tissues to obtain and output a final analysis result.
In order to solve the technical problems, the invention adopts another technical scheme that:
a fundus ultra-wide angle image analysis terminal, comprising a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the following steps when executing the computer program:
S1, cutting the fundus ultra-wide angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
S2, respectively carrying out lesion analysis on the region image corresponding to each fundus tissue to obtain a region analysis result of each fundus tissue;
S3, integrating the region analysis results of the fundus tissues to obtain and output a final analysis result.
The invention has the beneficial effects that: the fundus ultra-wide angle image is cut according to different fundus tissues, each cut image is analyzed separately to obtain the condition of a specific fundus tissue, namely a region analysis result, and the conditions of the different fundus tissues are integrated to obtain a final analysis result. Compared with whole-image analysis, cutting the image before analysis reduces the size of each input image, and the cut images can be analyzed at the same time, so the analysis speed is improved. Meanwhile, the specific conditions of different fundus tissues can be acquired and jointly analyzed as needed, yielding both the condition of each fundus tissue and an analysis result that combines those conditions, which provides a reference of higher clinical practicability.
Drawings
FIG. 1 is a flow chart showing the steps of a method for analyzing an ultra wide angle fundus image according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a fundus ultra-wide angle image analysis terminal according to an embodiment of the present invention;
description of the reference numerals:
1. a fundus ultra-wide angle image analysis terminal; 2. a processor; 3. a memory.
Detailed Description
In order to describe the technical contents, the achieved objects and effects of the present invention in detail, the following description will be made with reference to the embodiments in conjunction with the accompanying drawings.
Referring to fig. 1, a method for analyzing an ultra-wide angle fundus image includes the steps of:
S1, cutting the fundus ultra-wide angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
S2, respectively carrying out lesion analysis on the region image corresponding to each fundus tissue to obtain a region analysis result of each fundus tissue;
S3, integrating the region analysis results of the fundus tissues to obtain and output a final analysis result.
From the above description, the beneficial effects of the invention are as follows: the fundus ultra-wide angle image is cut according to different fundus tissues, each cut image is analyzed separately to obtain the condition of a specific fundus tissue, namely a region analysis result, and the conditions of the different fundus tissues are integrated to obtain a final analysis result. Compared with whole-image analysis, cutting the image before analysis reduces the size of each input image, and the cut images can be analyzed at the same time, so the analysis speed is improved. Meanwhile, the specific conditions of different fundus tissues can be acquired and jointly analyzed as needed, yielding both the condition of each fundus tissue and an analysis result that combines those conditions, providing a reference of higher clinical practicability.
Further, before S1, the method further includes:
training an AI model to automatically mark different fundus tissues;
the S1 specifically comprises the following steps:
s11, inputting the fundus ultra-wide angle image into the AI model to obtain fundus ultra-wide angle images with different fundus tissue marks;
s12, judging whether the position difference between the marks exceeds a threshold value, and if so, returning to S11; otherwise, cutting the fundus ultra-wide angle image according to the mark.
According to the above description, an AI model is trained to automatically mark the different fundus tissues, and the marking result is checked against preset thresholds; if the corresponding conditions are not met, the marking is performed again. This ensures the accuracy of the AI model's marks and improves the efficiency of marking images, and cutting the image according to the marks ensures the integrity of each fundus tissue in the cut images.
Further, the fundus tissue includes the optic disc, the macula, and the middle peripheral portion and peripheral portion of the retina;
the step of obtaining the region image corresponding to each fundus tissue in the step S1 specifically includes:
obtaining a first region image including the optic disc, a second region image including the macula, and a third region image including the middle peripheral portion and peripheral portion of the retina;
classifying the first region image, the second region image and the third region image according to lesions by using an image classification model to obtain a lesion classification result of each region image;
performing semantic segmentation on the first region image, the second region image and the third region image by using an image semantic segmentation model to obtain fundus tissue shapes in each region image;
and obtaining a region analysis result of each region image according to the lesion classification result and the fundus tissue shape.
From the above description, the integrity of the optic disc, the macula, and the middle peripheral and peripheral portions of the retina in the cut images is ensured, which facilitates subsequent analysis. The images are classified by an image classification model and segmented by a semantic segmentation model, so the shape of each fundus tissue is obtained along with the lesion probability; this facilitates the subsequent generation of a highly readable report and provides a more detailed reference for a doctor's diagnosis.
Further, step S3 specifically includes:
inputting the lesion classification result and the fundus tissue shape of each region image into a preset logic model to obtain and output a final analysis result.
As can be seen from the above description, inputting the lesion classification result and fundus tissue shape of each region image into a preset logic model allows the analysis results of the multiple cut images to be analyzed jointly: the local analysis results are integrated into an overall analysis result while remaining locally accurate, the detailed analysis process is visible in the logic model, and this transparent analysis process provides more reference data for doctors.
Further, step S12 specifically includes:
judging whether the position of the macula mark is on the horizontal left side or right side of the position of the optic disc mark, and if not, returning to S11;
judging whether the distance between the position of the macula mark and the position of the optic disc mark exceeds ten times the diameter of the optic disc mark, and if so, returning to S11;
judging whether the marks of the middle peripheral portion of the retina are outside the mark of the equatorial portion, and if not, returning to S11;
and if the position of the macula mark is on the horizontal left side or right side of the position of the optic disc mark, the distance between the two is less than ten times the diameter of the optic disc mark, and the marks of the middle peripheral portion of the retina are outside the mark of the equatorial portion, cutting the fundus ultra-wide angle image according to the marks.
From the above description, specific criteria are set to verify the AI model's marking of the fundus ultra-wide angle image, and if the marking result does not meet the thresholds, the marking is performed again; obviously erroneous marking results can thus be eliminated, and the accuracy of the marking results is improved.
Referring to fig. 2, a fundus ultra-wide angle image analysis terminal includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and the processor implements the following steps when executing the computer program:
S1, cutting the fundus ultra-wide angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
S2, respectively carrying out lesion analysis on the region image corresponding to each fundus tissue to obtain a region analysis result of each fundus tissue;
S3, integrating the region analysis results of the fundus tissues to obtain and output a final analysis result.
Referring to fig. 1, a first embodiment of the present invention is as follows:
a fundus ultra-wide angle image analysis method specifically comprises the following steps:
training an AI model to automatically mark different fundus tissues;
In an alternative implementation, marking software (such as LabelMe, or independently developed marking software) is used to trace the boundaries of regions such as eyelashes, eyelids and the instrument frame in a large number of fundus ultra-wide angle images, and the mask images generated by the tracing are input into an AI model (which may be an image semantic segmentation model such as FCN (Fully Convolutional Network), U-NET or U-NET++) for training. The trained model can automatically mark regions such as eyelashes, eyelids and the instrument frame, and the marked instrument frame can be cut away directly;
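A minimal sketch of the mask-generation step described above, assuming LabelMe-style polygon annotations: each traced boundary is rasterized into a binary mask that a segmentation model such as FCN or U-NET could be trained on. The function names and the even-odd ray-casting test are generic illustrative choices, not the patent's actual tooling.

```python
def point_in_polygon(x, y, polygon):
    """Even-odd ray-casting test: count crossings of the polygon's edges
    by a ray going in the +x direction from (x, y)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_to_mask(width, height, polygons):
    """Rasterize traced boundary polygons into a binary mask
    (1 = traced region such as eyelash, eyelid or instrument frame,
    0 = background)."""
    mask = [[0] * width for _ in range(height)]
    for polygon in polygons:
        for row in range(height):
            for col in range(width):
                # Sample at the pixel centre.
                if point_in_polygon(col + 0.5, row + 0.5, polygon):
                    mask[row][col] = 1
    return mask
```

In practice an annotation tool or library rasterizer would replace the per-pixel loop, but the resulting mask plays the same role as training input.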
In an alternative embodiment, the optic disc, the macula and the vortex veins are marked with the marking software, the images generated by the marking are input into an AI model (an image segmentation model such as FCN or U-NET) for training, and the mark ranges of other regions are set based on the positions of the optic disc, macula or vortex veins; the AI model then marks the different fundus tissues according to the training result and the set positions of the other regions relative to the optic disc, macula or vortex veins;
For example, the distance between the macula and the optic disc is determined from the positions of the marked macula and optic disc, and a circle is drawn centered on the macula with a radius of two to four times that distance; this circle is the posterior pole. The equatorial portion is determined from the positions of the vortex veins, and the area outside the equatorial portion is the peripheral portion of the retina;
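The posterior-pole construction above reduces to simple geometry. A sketch, with illustrative names and the multiplier k assumed to be chosen from the stated range of two to four:

```python
import math

def posterior_pole_circle(macula, disc, k=3.0):
    """Posterior pole as a circle centred on the macula mark, with radius
    k times the macula-disc distance (the text gives k in the range 2-4)."""
    dist = math.hypot(disc[0] - macula[0], disc[1] - macula[1])
    return macula, k * dist

def in_posterior_pole(point, macula, disc, k=3.0):
    """True if a point lies inside the posterior-pole circle."""
    (cx, cy), radius = posterior_pole_circle(macula, disc, k)
    return math.hypot(point[0] - cx, point[1] - cy) <= radius
```

The equatorial portion could be modelled the same way, as a circle fitted to the vortex-vein marks, with the retinal periphery defined as everything outside it.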
specifically, the marker range of the macula is determined through the fovea of the macula;
S1, cutting the fundus ultra-wide angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
wherein the fundus tissue includes the optic disc, the macula, and the middle peripheral portion and peripheral portion of the retina;
the S1 specifically comprises the following steps:
s11, inputting the fundus ultra-wide angle image into the AI model to obtain fundus ultra-wide angle images with different fundus tissue marks;
s12, judging whether the position difference between the marks exceeds a threshold value, and if so, returning to S11; otherwise, cutting the fundus ultra-wide angle image according to the mark;
the method specifically comprises the following steps of;
judging whether the position of the macula lutea mark is on the horizontal left side or the horizontal right side of the position of the disc mark, and if not, returning to S11;
judging whether the distance between the position of the macula lutea mark and the position of the disc mark exceeds ten times the diameter of the disc mark, and if so, returning to S11;
judging whether the marks of the peripheral part of the retina are out of the marks of the equatorial part, if not, returning to S11
Cutting the fundus ultra-wide angle image according to the marks if the position of the marks of the macula is on the horizontal left side or the right side of the position of the marks of the optic disc, the distance between the position of the marks of the macula and the position of the marks of the optic disc is less than ten times the diameter of the marks of the optic disc, and the marks of the middle and peripheral parts of the retina are outside the marks of the equatorial part;
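The three plausibility checks of step S12 can be sketched as a single validation function. All names, the (x, y) pixel convention, and the circular model of the equatorial mark are illustrative assumptions; the patent only specifies the three conditions themselves.

```python
import math

def validate_marks(macula, disc, disc_diameter,
                   peripheral_marks, equator_center, equator_radius):
    """Plausibility checks of step S12 on the AI model's marks.
    Returns True if the marks pass; False means step S11 is re-run."""
    # 1. The macula mark must lie horizontally left or right of the optic
    #    disc mark, i.e. displaced mainly along x rather than stacked
    #    vertically.
    if abs(macula[0] - disc[0]) <= abs(macula[1] - disc[1]):
        return False
    # 2. The macula-disc distance must not exceed ten disc-mark diameters.
    if math.hypot(macula[0] - disc[0], macula[1] - disc[1]) > 10 * disc_diameter:
        return False
    # 3. Marks of the retinal middle periphery must lie outside the
    #    equatorial-portion mark, modelled here as a circle.
    for x, y in peripheral_marks:
        if math.hypot(x - equator_center[0], y - equator_center[1]) <= equator_radius:
            return False
    return True
```

A caller would loop: re-run the marking model while `validate_marks` returns False, then cut the image according to the accepted marks.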
S2, respectively carrying out lesion analysis on the region image corresponding to each fundus tissue to obtain a region analysis result of each fundus tissue;
S3, integrating the region analysis results of the fundus tissues to obtain and output a final analysis result.
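The overall S1-S3 flow can be sketched as a generic pipeline with pluggable stages; the function names and the dictionary-of-regions representation are illustrative, not part of the patent.

```python
def analyze_fundus_image(image, cut_fn, analyze_fn, integrate_fn):
    """S1-S3 pipeline: cut into per-tissue region images, analyze each
    region independently (these calls could run in parallel), then
    integrate the region results into one final result."""
    regions = cut_fn(image)                       # S1: one sub-image per tissue
    region_results = {name: analyze_fn(name, sub)  # S2: independent analyses
                      for name, sub in regions.items()}
    return integrate_fn(region_results)            # S3: joint conclusion
```

Because step S2 touches each region image independently, the per-region calls are the natural unit for the parallel analysis the text credits with the speed-up.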
The second embodiment of the invention is as follows:
A fundus ultra-wide angle image analysis method, which differs from the first embodiment in that:
the step of obtaining the region image corresponding to each fundus tissue in the step S1 specifically includes:
obtaining a first region image including the optic disc, a second region image including the macula, and a third region image including the middle peripheral portion and peripheral portion of the retina;
classifying the first region image, the second region image and the third region image according to lesions by using an image classification model (such as a CNN, Convolutional Neural Network) to obtain a lesion classification result for each region image;
in an alternative embodiment, the lesion condition and the lesion classification result of each regional image are obtained by using an image classification model;
Specifically, the image classification model is obtained through training: the lesion conditions of different fundus tissues in a large number of fundus ultra-wide angle images are marked, the lesion conditions including normal, exudation, hemorrhage, atrophy and degeneration, together with the lesion categories corresponding to the lesion conditions of the different fundus tissues; the marked results are input into a CNN model for training, and the trained CNN model can classify lesions or mark lesion conditions according to the input image;
the image classification model may also be EfficientNet, YOLO, or the like;
the method comprises the following steps of: pathological conditions, optic disc atrophy; lesions, glaucoma;
performing semantic segmentation on the first region image, the second region image and the third region image by using an image semantic segmentation model to obtain fundus tissue shapes in each region image;
obtaining a region analysis result of each region image according to the lesion classification result and the fundus tissue shape;
Specifically, for the first region image, the lesion condition of the optic disc, namely normal, exudation, hemorrhage, atrophy or degeneration, is determined by the image classification model, and the extents of the optic cup and optic disc are acquired by the semantic segmentation model; for the second region image, the lesion condition of the macula is determined by the image classification model, and the extent of the macula is acquired by the semantic segmentation model; for the third region image, whether degeneration exists in the peripheral portion of the retina is determined by the image classification model;
Step S3 is specifically as follows:
inputting the lesion classification result and the fundus tissue shape of each region image into a preset logic model to obtain and output a final analysis result;
Specifically, the lesion classification results of multiple fundus tissues and the fundus tissue shapes are analyzed jointly, or the whole image is additionally classified by lesion; the joint analysis result and the whole-image lesion classification result are input into a logic model (such as a logistic model or a mixed linear model) for judgment, a decision-tree result is formed, and a conclusion is finally generated;
Specifically, for the first region image, the cup-disc ratio is calculated from the extents of the optic cup and optic disc. If the cup-disc ratio exceeds a threshold and the classification result of the image classification model is optic disc atrophy, the first region image analysis result is a high likelihood of glaucoma; if the cup-disc ratio does not exceed the threshold but the classification result is optic disc atrophy, the result is that glaucoma is possible and the cup and disc should be checked manually; if the cup-disc ratio exceeds the threshold and the classification result is that the optic disc is normal, the result is that the optic disc is normal;
in an alternative embodiment, the fundus ultra-wide angle image is cut to obtain a fourth region image including the posterior pole, and the lesion condition and lesion category of the posterior pole are obtained through the image classification model: the lesion condition of the posterior pole is bleeding and exudation, with lesion category diabetic retinopathy at a probability of 0.9; the lesion condition of the macula is bleeding and exudation, with lesion category maculopathy at a probability of 0.94; because the two diseases rarely occur at the same time under general conditions, according to the preset logic model, in diabetic retinopathy the macular region lesion weight is 0.3 and the posterior pole lesion weight is 0.7, giving 0.94×0.3+0.9×0.7=0.912, and the final judgment result is diabetic retinopathy; if the lesion category of the posterior pole is diabetic retinopathy with a probability of 0.1, then 0.94×0.3+0.1×0.7=0.352, and the final judgment result is maculopathy;
specifically, a corresponding threshold value can be set to determine the final judgment result; the threshold value can be set within the range of 0.5-0.8 according to the model calculation results, and if a calculated result exceeds the threshold value, it is taken as the final judgment result; for the calculation above, if the threshold value is set to 0.8, then 0.912>0.8 and the final judgment result is diabetic retinopathy, while 0.352<0.8 and the final judgment result of that case is maculopathy;
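The weighted combination and threshold check in the worked example can be sketched as below. The weight table (0.3 macula, 0.7 posterior pole) and the 0.8 threshold come directly from the example; the function names and the two-way maculopathy fallback are illustrative assumptions.

```python
# Sketch of the logic-model combination from the worked example: per-region
# lesion probabilities are combined with preset per-tissue weights and the
# weighted score is compared against a configurable threshold.

def combined_score(probs, weights):
    """Weighted sum of per-region probabilities for one candidate disease."""
    return sum(probs[region] * weights[region] for region in weights)

def final_judgment(dr_posterior_prob, macula_prob, threshold=0.8):
    """Decide between diabetic retinopathy and maculopathy as in the example."""
    weights = {"macula": 0.3, "posterior_pole": 0.7}  # DR weights from the example
    score = combined_score(
        {"macula": macula_prob, "posterior_pole": dr_posterior_prob}, weights
    )
    if score > threshold:
        return "diabetic retinopathy", score
    return "maculopathy", score

# First case from the description:  0.94*0.3 + 0.9*0.7 = 0.912 > 0.8 -> DR
# Second case:                      0.94*0.3 + 0.1*0.7 = 0.352 < 0.8 -> maculopathy
```

A production logic model would carry a weight table per candidate disease rather than the single hard-coded pair shown here.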
the whole judgment process is also output when the final judgment result is output;
specifically, when the final analysis result is output, the lesion condition of each region, the lesion classification result and the combined analysis result of each region are output; if the above is judged to be glaucoma and diabetic retinopathy and the peripheral portion of the retina is degenerated, the final analysis result is: optic disc atrophy, a relatively large cup-disc ratio, a visible degeneration area in the peripheral portion of the retina, bleeding and exudation in the macula and the posterior pole, suspected glaucoma and diabetic retinopathy.
Referring to fig. 2, a third embodiment of the present invention is as follows:
the fundus ultra-wide angle image analysis terminal 1 comprises a processor 2, a memory 3 and a computer program stored in the memory 3 and capable of running on the processor 2, wherein the processor 2 implements the steps of the first or second embodiment when executing the computer program.
In summary, the invention provides a method and a terminal for analyzing a fundus ultra-wide angle image. A plurality of region images obtained by dividing the fundus ultra-wide angle image according to fundus tissues are analyzed, which reduces the single input quantity; the plurality of region images are analyzed at the same time, greatly improving the speed of image analysis. Through training an AI model, fundus tissues are automatically marked so that image segmentation can be performed according to the marks; the marks are checked against a threshold value, and if a marking result exceeds the threshold value, the marking is judged to be erroneous and re-marking is performed, which ensures the marking accuracy of fundus tissues and lays a foundation for subsequent analysis. The segmented images are subjected to lesion classification through the image classification model, and their shapes are acquired through the image semantic segmentation model, so that not only the lesion condition but also the extent of the lesion can be acquired, providing more reliable references for the determination of subsequent lesions. When the conclusion is finally output, the judgment result of the lesion, the lesion condition and the lesion extent are all output, which is closer to a diagnosis report made by a doctor, conforms to doctors' daily diagnosis habits, and provides a more detailed conclusion with high reference value.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent changes made by the specification and drawings of the present invention, or direct or indirect application in the relevant art, are included in the scope of the present invention.

Claims (8)

1. The fundus ultra-wide angle image analysis method is characterized by comprising the following steps:
s1, cutting the fundus ultra-wide angle image according to different fundus tissues to obtain an area image corresponding to each fundus tissue;
s2, respectively carrying out lesion analysis on the regional image corresponding to each fundus tissue to obtain a regional analysis result of each fundus tissue;
s3, integrating the regional analysis result of each fundus tissue to obtain and output a final analysis result;
the fundus tissue includes the optic disc, the macula, and the mid-peripheral and peripheral portions of the retina;
S1 is preceded by: performing boundary tracing on regions such as the eyelashes, the eyelids and the instrument rim in a large number of fundus ultra-wide angle images through marking software, and inputting the mask images generated by the marking into an AI model for training, so that the trained model automatically marks the eyelash, eyelid and instrument rim regions and the marked instrument rim is directly cropped out;
the step of obtaining the region image corresponding to each fundus tissue in the step S1 specifically includes:
obtaining a first region image including the optic disc, a second region image including the macula lutea, and a third region image including the middle peripheral portion and the peripheral portion of the retina;
the step S2 is specifically as follows:
classifying the first region image, the second region image and the third region image according to lesions by using an image classification model to obtain a lesion classification result of each region image; wherein the lesion classification result comprises a probability;
performing semantic segmentation on the first region image, the second region image and the third region image by using an image semantic segmentation model to obtain fundus tissue shapes in each region image;
obtaining a region analysis result of each region image according to the lesion classification result and the fundus tissue shape;
the step S3 is specifically as follows:
performing combined analysis on the lesion classification results of a plurality of fundus tissues and the fundus tissue shapes, or performing lesion classification on the whole image, inputting the result of the combined analysis and the result of the whole-image lesion classification into a logic model for judgment to form a decision tree result, and finally generating a conclusion;
the logic model comprises weights of a plurality of fundus tissues, and a decision tree result is formed according to the weights and probability calculation;
and outputting the lesion condition of each region, the lesion classification result and the combined analysis result of each region when the final analysis result is output.
2. The method for analyzing an ultra-wide-angle fundus image according to claim 1, wherein prior to S1, further comprising:
training an AI model to automatically mark different fundus tissues;
the S1 specifically comprises the following steps:
s11, inputting the fundus ultra-wide angle image into the AI model to obtain fundus ultra-wide angle images with different fundus tissue marks;
s12, judging whether the position difference between the marks exceeds a threshold value, and if so, returning to S11; otherwise, cutting the fundus ultra-wide angle image according to the mark.
3. The method for analyzing an ultra-wide angle fundus image according to claim 1, wherein the step S3 is specifically:
and inputting the lesion classification result and the fundus tissue shape of each regional image into a preset logic model to obtain a final analysis result and outputting the final analysis result.
4. The method for analyzing an ultra-wide angle fundus image according to any one of claims 1 to 2, wherein S12 specifically comprises:
judging whether the position of the macula lutea mark is on the horizontal left side or the horizontal right side of the position of the disc mark, and if not, returning to S11;
judging whether the distance between the position of the macula lutea mark and the position of the disc mark exceeds ten times the diameter of the disc mark, and if so, returning to S11;
judging whether the marks of the middle peripheral part and the peripheral part of the retina are outside the mark of the equatorial part, and if not, returning to S11;
and if the position of the macula lutea mark is on the horizontal left side or the horizontal right side of the position of the optic disc mark, the distance between the position of the macula lutea mark and the position of the optic disc mark is smaller than ten times the diameter of the optic disc mark, and the mark of the middle peripheral part of the retina is outside the mark of the equatorial part, cutting the fundus ultra-wide angle image according to the marks.
5. A fundus ultra-wide angle image analysis terminal, comprising a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the following steps when executing the computer program:
s1, cutting the fundus ultra-wide angle image according to different fundus tissues to obtain an area image corresponding to each fundus tissue;
s2, respectively carrying out lesion analysis on the regional image corresponding to each fundus tissue to obtain a regional analysis result of each fundus tissue;
s3, integrating the regional analysis result of each fundus tissue to obtain and output a final analysis result;
the fundus tissue includes the optic disc, the macula, and the mid-peripheral and peripheral portions of the retina;
S1 is preceded by: performing boundary tracing on regions such as the eyelashes, the eyelids and the instrument rim in a large number of fundus ultra-wide angle images through marking software, and inputting the mask images generated by the marking into an AI model for training, so that the trained model automatically marks the eyelash, eyelid and instrument rim regions and the marked instrument rim is directly cropped out;
the step of obtaining the region image corresponding to each fundus tissue in the step S1 specifically includes:
obtaining a first region image including the optic disc, a second region image including the macula lutea, and a third region image including the middle peripheral portion and the peripheral portion of the retina;
the step S2 is specifically as follows:
classifying the first region image, the second region image and the third region image according to lesions by using an image classification model to obtain a lesion classification result of each region image; wherein the lesion classification result comprises a probability;
performing semantic segmentation on the first region image, the second region image and the third region image by using an image semantic segmentation model to obtain fundus tissue shapes in each region image;
obtaining a region analysis result of each region image according to the lesion classification result and the fundus tissue shape;
the step S3 is specifically as follows:
performing combined analysis on the lesion classification results of a plurality of fundus tissues and the fundus tissue shapes, or performing lesion classification on the whole image, inputting the result of the combined analysis and the result of the whole-image lesion classification into a logic model for judgment to form a decision tree result, and finally generating a conclusion;
the logic model comprises weights of a plurality of fundus tissues, and a decision tree result is formed according to the weights and probability calculation;
and outputting the lesion condition of each region, the lesion classification result and the combined analysis result of each region when the final analysis result is output.
6. The fundus ultra-wide angle image analysis terminal according to claim 5, wherein prior to S1, further comprising:
training an AI model to automatically mark different fundus tissues;
the S1 specifically comprises the following steps:
s11, inputting the fundus ultra-wide angle image into the AI model to obtain fundus ultra-wide angle images with different fundus tissue marks;
s12, judging whether the position difference between the marks exceeds a threshold value, and if so, returning to S11; otherwise, cutting the fundus ultra-wide angle image according to the mark.
7. The fundus ultra-wide angle image analysis terminal according to claim 5, wherein S3 is specifically:
and inputting the lesion classification result and the fundus tissue shape of each regional image into a preset logic model to obtain a final analysis result and outputting the final analysis result.
8. The fundus ultra-wide angle image analysis terminal according to any one of claims 5 to 6, wherein S12 specifically comprises:
judging whether the position of the macula lutea mark is on the horizontal left side or the horizontal right side of the position of the disc mark, and if not, returning to S11;
judging whether the distance between the position of the macula lutea mark and the position of the disc mark exceeds ten times the diameter of the disc mark, and if so, returning to S11;
judging whether the marks of the middle peripheral part and the peripheral part of the retina are outside the mark of the equatorial part, and if not, returning to S11;
and if the position of the macula lutea mark is on the horizontal left side or the horizontal right side of the position of the optic disc mark, the distance between the position of the macula lutea mark and the position of the optic disc mark is smaller than ten times the diameter of the optic disc mark, and the mark of the middle peripheral part of the retina is outside the mark of the equatorial part, cutting the fundus ultra-wide angle image according to the marks.
CN202010565477.XA 2020-06-19 2020-06-19 Method and terminal for analyzing ultra-wide angle image of eye bottom Active CN111583261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010565477.XA CN111583261B (en) 2020-06-19 2020-06-19 Method and terminal for analyzing ultra-wide angle image of eye bottom

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010565477.XA CN111583261B (en) 2020-06-19 2020-06-19 Method and terminal for analyzing ultra-wide angle image of eye bottom

Publications (2)

Publication Number Publication Date
CN111583261A CN111583261A (en) 2020-08-25
CN111583261B true CN111583261B (en) 2023-08-18

Family

ID=72127560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010565477.XA Active CN111583261B (en) 2020-06-19 2020-06-19 Method and terminal for analyzing ultra-wide angle image of eye bottom

Country Status (1)

Country Link
CN (1) CN111583261B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113768461B (en) * 2021-09-14 2024-03-22 北京鹰瞳科技发展股份有限公司 Fundus image analysis method, fundus image analysis system and electronic equipment
CN116309549B (en) * 2023-05-11 2023-10-03 爱尔眼科医院集团股份有限公司 Fundus region detection method, fundus region detection device, fundus region detection equipment and readable storage medium
CN117689893B (en) * 2024-02-04 2024-06-04 智眸医疗(深圳)有限公司 Laser scanning ultra-wide-angle fundus image semantic segmentation method, system and terminal

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018116321A2 (en) * 2016-12-21 2018-06-28 Braviithi Technologies Private Limited Retinal fundus image processing method
CN108717696A (en) * 2018-05-16 2018-10-30 上海鹰瞳医疗科技有限公司 Macula lutea image detection method and equipment
CN109472781A (en) * 2018-10-29 2019-03-15 电子科技大学 A kind of diabetic retinopathy detection system based on serial structure segmentation
CN110327013A (en) * 2019-05-21 2019-10-15 北京至真互联网技术有限公司 Eye fundus image detection method, device and equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018116321A2 (en) * 2016-12-21 2018-06-28 Braviithi Technologies Private Limited Retinal fundus image processing method
CN108717696A (en) * 2018-05-16 2018-10-30 上海鹰瞳医疗科技有限公司 Macula lutea image detection method and equipment
CN109472781A (en) * 2018-10-29 2019-03-15 电子科技大学 A kind of diabetic retinopathy detection system based on serial structure segmentation
CN110327013A (en) * 2019-05-21 2019-10-15 北京至真互联网技术有限公司 Eye fundus image detection method, device and equipment and storage medium

Also Published As

Publication number Publication date
CN111583261A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN111583261B (en) Method and terminal for analyzing ultra-wide angle image of eye bottom
CN108615051B (en) Diabetic retina image classification method and system based on deep learning
WO2021068523A1 (en) Method and apparatus for positioning macular center of eye fundus image, electronic device, and storage medium
Shah et al. Validation of deep convolutional neural network-based algorithm for detection of diabetic retinopathy–artificial intelligence versus clinician for screening
CN111968120B (en) Tooth CT image segmentation method for 3D multi-feature fusion
CN107330449A (en) A kind of BDR sign detection method and device
Zong et al. U-net based method for automatic hard exudates segmentation in fundus images using inception module and residual connection
Scarpa et al. Multiple-image deep learning analysis for neuropathy detection in corneal nerve images
CN109697719A (en) A kind of image quality measure method, apparatus and computer readable storage medium
CN113158821B (en) Method and device for processing eye detection data based on multiple modes and terminal equipment
CN113066066A (en) Retinal abnormality analysis method and device
CN111028230A (en) Fundus image optic disc and macula lutea positioning detection algorithm based on YOLO-V3
CN111178420A (en) Coronary segment labeling method and system on two-dimensional contrast image
Wang et al. Cataract detection based on ocular B-ultrasound images by collaborative monitoring deep learning
Zheng et al. Five-category intelligent auxiliary diagnosis model of common fundus diseases based on fundus images
Wang et al. Accurate disease detection quantification of iris based retinal images using random implication image classifier technique
CN106446805A (en) Segmentation method and system for optic cup in eye ground photo
Liu et al. Tracking-based deep learning method for temporomandibular joint segmentation
CN105825519B (en) Method and apparatus for processing medical images
CN109978796A (en) Optical fundus blood vessel Picture Generation Method, device and storage medium
CN112734769A (en) Medical image segmentation and quantitative analysis method based on interactive information guided deep learning method, computer device and storage medium
Zhu et al. Calculation of ophthalmic diagnostic parameters on a single eye image based on deep neural network
CN103886580A (en) Tumor image processing method
Soliz et al. Computer-aided methods for quantitative assessment of longitudinal changes in retinal images presenting with maculopathy
Thanh et al. A real-time classification of glaucoma from retinal fundus images using AI technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210318

Address after: Unit G7, block a, floor 1, building 9, Baoneng Science Park, Qinghu village, Qinghu community, Longhua street, Longhua District, Shenzhen, Guangdong 518000

Applicant after: Lin Chen

Applicant after: Huishili medical (Shenzhen) Co.,Ltd.

Address before: Room 604, block 4, ginkgo garden, 296 Shangdu Road, Cangshan District, Fuzhou City, Fujian Province 350000

Applicant before: Lin Chen

Applicant before: Ke Junlong

TA01 Transfer of patent application right

Effective date of registration: 20220908

Address after: Room 804, Building 3A, Qiaoxiang Mansion, Qiaoxiang Road, Futian District, Shenzhen, Guangdong 518000

Applicant after: Wisdom Medical (Shenzhen) Co.,Ltd.

Applicant after: Lin Chen

Address before: Unit G7, block a, floor 1, building 9, Baoneng Science Park, Qinghu village, Qinghu community, Longhua street, Longhua District, Shenzhen, Guangdong 518000

Applicant before: Lin Chen

Applicant before: Huishili medical (Shenzhen) Co.,Ltd.

GR01 Patent grant